ptak_dev

Used Claude Code to build CareerCraft AI - a conversational career consultant that generates tailored resumes. The session analytics angle is interesting because my experience lines up with the "explain the why" pattern.

The sessions where I gave Claude Code context about the problem space (job seekers rewriting resumes 20x) produced dramatically better results than sessions where I just said "build a resume generator." The AI designed a conversational intake flow, live resume preview with real-time updates, and PDF export - things I hadn't explicitly asked for but that emerged from understanding the problem.

Would love to see Rudel break down sessions by "context richness" vs output quality. My gut says the first 60 seconds of context-setting predicts the entire session's productivity.

Built on SuperNinja (no-code AI app platform): https://super.myninja.ai/apps/6de082c7-a05f-4fc5-a7d3-ab56cc...

dmix

I've seen Claude ignore important parts of skills/agent files multiple times. I was running a cleanup SKILL.md over a hundred markdown files, manually in small groups of 5, and about half the time it listened and ran the skill as written. The other half, it would spend two minutes trying to understand the codebase, hunting for markdown-related code for no good reason, before reverting to what the skill said.

LLMs are far from consistent.

smallerfish

> content, the content or transcript of the agent session

Does this include the files being worked on by the agent in the session, or just the chat transcript?

emehex

For those unaware, Claude Code comes with a built-in /insights command...

Aurornis

> 26% of sessions are abandoned, most within the first 60 seconds

Starting new sessions frequently and using separate new sessions for small tasks is a good practice.

Keeping context clean and focused is a highly effective way to keep the agent on task. An up-to-date AGENTS.md lets new sessions get into simple tasks quickly, so you can use single-purpose sessions for small tasks without carrying the baggage of a long prior context into them.

bool3max

Why is the comment calling out the biggest issue here so heavily downvoted? Privacy is a massive concern with this.

longtermemory

From session analysis, it would be interesting to understand how crucial documentation is, in particular the level of detail in CLAUDE.md. It seems to me that documentation that is too long, and often out of date, sometimes adds entropy rather than making the model and agent more efficient.

In my experience it is often better and more effective to remove, clean up, and simplify (both in CLAUDE.md and in the code) than to have everything documented in detail.

So it would be interesting to use session analysis to identify the relationship between CLAUDE.md documentation and model efficiency. How often does the developer reject the LLM's output, relative to the level of detail in CLAUDE.md?
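If a session-analytics export exposed per-session records, the question could be checked directly. A minimal sketch, assuming hypothetical record fields `claude_md_words` and `rejected` (these fields, and the data, are invented for illustration; the tool in the article does not document such an export):

```python
from collections import defaultdict

def rejection_rate_by_doc_size(sessions, bucket_size=200):
    """Group sessions by CLAUDE.md length (in words) and compute the
    fraction of sessions whose output the developer rejected."""
    buckets = defaultdict(lambda: [0, 0])  # bucket start -> [rejected, total]
    for s in sessions:
        b = (s["claude_md_words"] // bucket_size) * bucket_size
        buckets[b][0] += 1 if s["rejected"] else 0
        buckets[b][1] += 1
    return {b: rej / tot for b, (rej, tot) in sorted(buckets.items())}

# Made-up example data:
sessions = [
    {"claude_md_words": 150, "rejected": False},
    {"claude_md_words": 180, "rejected": False},
    {"claude_md_words": 950, "rejected": True},
    {"claude_md_words": 990, "rejected": False},
]
print(rejection_rate_by_doc_size(sessions))  # → {0: 0.0, 800: 0.5}
```

A flat or inverted curve here would support the "less documentation, less entropy" hypothesis.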

152334H

is there a reason, other than general faith in humanity, to assume those '1573 sessions' are real?

I do not see any link or source for the data. I assume it is to remain closed, if it exists.

ericwebb

I 100% agree that we need tools to understand and audit these workflows for opportunities. Nice work.

TBH, I am very hesitant to upload my CC logs to a third-party service.

mbesto

So what conclusions have you drawn or could a person reasonably draw with this data?

marconardus

It might be worthwhile to include some of an example run in your README.

I scrolled through and didn't see enough to justify installing and running a thing.

KaiserPister

This is awesome! I’m working on the Open Prompt Initiative as a way for open source to share prompting knowledge.

alyxya

Why does it need login and cloud upload? A local CLI tool analyzing the logs should be sufficient.
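A local version needn't be complicated. A minimal sketch, assuming session transcripts live as JSONL files under `~/.claude/projects/` with a `type` field per event (the path and field names are assumptions about Claude Code's on-disk format, not a documented API):

```python
import json
from pathlib import Path

SESSIONS_DIR = Path.home() / ".claude" / "projects"  # assumed transcript location

def summarize_sessions(root: Path) -> dict:
    """Count sessions and user/assistant turns across all JSONL transcripts."""
    stats = {"sessions": 0, "user_turns": 0, "assistant_turns": 0}
    for transcript in root.rglob("*.jsonl"):
        stats["sessions"] += 1
        for line in transcript.read_text().splitlines():
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            kind = event.get("type")
            if kind == "user":
                stats["user_turns"] += 1
            elif kind == "assistant":
                stats["assistant_turns"] += 1
    return stats

if __name__ == "__main__":
    if SESSIONS_DIR.is_dir():
        print(summarize_sessions(SESSIONS_DIR))
```

Everything stays on disk; no login, no upload.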

ekropotin

> That's it. Your Claude Code sessions will now be uploaded automatically.

No, thanks

anthonySs

is this observability for your claude code calls or specifically for high level insights like skill usage?

would love to know your actual day to day use case for what you built

mentalgear

How diverse is your dataset?

dboreham

One potential reason for sessions being abandoned within 60 seconds, in my experience, is realizing you forgot to set something in the environment: a GitHub token missing, the toolchain for the language not on the PATH, etc. Claude doesn't provide elegant ways to fix those things in-session, so I'll just exit, fix things up, and start Claude again. It does have the option to continue a previous session, but there's typically no point in these "oops, I forgot that" cases.
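Those "oops, I forgot that" exits can be caught before the session even starts with a small preflight check. A minimal sketch; the specific variable and tool names checked here are only examples to adapt to your own setup:

```python
import os
import shutil
import sys

# Things to verify before launching an agent session -- purely illustrative.
REQUIRED_ENV = ["GITHUB_TOKEN"]      # tokens the session will need
REQUIRED_TOOLS = ["git", "node"]     # binaries expected on PATH

def preflight(env=REQUIRED_ENV, tools=REQUIRED_TOOLS) -> list[str]:
    """Return a list of human-readable problems; empty means all clear."""
    problems = []
    for var in env:
        if not os.environ.get(var):
            problems.append(f"environment variable {var} is not set")
    for tool in tools:
        if shutil.which(tool) is None:
            problems.append(f"{tool} not found on PATH")
    return problems

if __name__ == "__main__":
    for problem in preflight():
        print("preflight:", problem, file=sys.stderr)
```

Run it from a shell wrapper before `claude`, and an abandoned 60-second session becomes a one-second failed check instead.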

cluckindan

Nice. Now, to vibe myself a locally hosted alternative.

lau_chan

Does it work for Codex?

vova_hn2

It's so sad that on top of black-box LLMs we also build tools that are pretty much black boxes themselves.

It has become very hard to understand exactly what is sent to the LLM as input/context and exactly how the output is processed.
