Thursday, April 09, 2026

REVIEW: My Dumpling, Your Dumpling by K.E. Bartlet (4-stars)

A Distinctive Debut: Competence, Youth, and a New Lens on Military Fiction

I picked up My Dumpling, Your Dumpling because I wanted a challenge. I think it’s important to read outside my usual genre preferences — not for escapism, but because good fiction offers a particular worldview. When an author succeeds, you’re not just reading a story; you’re trying on a different cognitive framework. That’s the part I find interesting.

My academic background is in Spanish and Sociology, and most of my undergraduate literary training was in the Boom period — Cortázar, García Márquez, Borges, Vargas Llosa. Those writers taught me to treat fiction as a tool for exploring how minds work. Rayuela forces you to construct the narrative yourself. Cien años de soledad operates on cyclical time and mythic logic. Borges turns stories into philosophical puzzles. That training shaped how I read: I look for the worldview behind the text.

Later, when I finally read Jane Austen and Edith Wharton, I was struck by their observational precision — the way they capture interior life, social nuance, and micro‑interactions with almost anthropological clarity. I spent years catching up on 19th‑ and early‑20th‑century English literature because it represented a completely different cognitive tradition. None of this is my entire reading history, of course, but it illustrates the pattern: I read fiction to understand how people think.

K.E. Bartlet’s debut fits into that pattern in a surprising way. My Dumpling, Your Dumpling presents a worldview shaped by military and intelligence environments: procedural, compartmentalized, time‑stamped, emotionally masked, and mission‑driven. Each chapter opens with a location/time/character header — essentially a SITREP. The action is often procedural and easy to miss if you don’t have that background. Operators will fill in the blanks; civilian readers may need a film adaptation to visualize certain sequences. That’s not a flaw — it’s a structural choice that protects both the reader and the author. It keeps the violence non‑graphic and keeps the operational details appropriately abstract.

What I appreciated most is how confidently Bartlet writes smart, competent young people, especially young women. There’s no gendered commentary, no harassment, no “woman in a man’s world” framing. It’s a parallel universe where women can operate at full capacity without misogyny as background radiation. That alone makes the book refreshing.

The interpersonal dynamics are subtle, especially the slow‑burn trust arc between Eliza and Melody. Bartlet writes emotional connection the way it forms in high‑risk environments: quietly, professionally, and under layers of structure. It’s not a romance that interrupts the plot; it’s a bond that grows inside it. Eliza’s pep talks and her ability to read her team under pressure show a level of empathy and leadership that makes her a compelling protagonist.

As a debut from a 28‑year‑old author, this is impressive work. The voice is distinctive, the worldbuilding is grounded in real strategic thinking, and the characters feel like people who could exist in the modern intelligence community. I’m curious to see how Bartlet’s craft evolves — and how these characters develop — in the rest of the series.

Wednesday, April 08, 2026

REVIEW: Sense and Respond by Jeff Gothelf (3-stars)

A Practical Book for Leaders Moving Beyond Industrial‑Age Thinking

Most business books either bury you in abstract frameworks or force you through a fictional executive’s personal life to make a point. Sense and Respond doesn’t do that. It stays grounded in real examples and focuses on the mechanics of modern work: feedback loops, adaptive planning, and the shift from industrial‑age certainty to software‑age change. That alone puts it ahead of a lot of what’s out there.

Industrial‑Age Planning vs. Software Reality

One of the strongest parts of the book is the way it explains why traditional planning models don’t translate into software. It’s not because industrial planning is outdated or wrong. It’s because it was built for a completely different category of problems.

If you’re building a bridge, you’re working with fixed materials, established safety requirements, and environmental conditions that are mostly knowable. The physics don’t change halfway through the project. You can plan that work with confidence because the domain is stable.

Software is the opposite. The problem space shifts while you’re working. User needs evolve. Market conditions change. Technology capabilities expand. You discover constraints as you go. The environment is fluid, and the cost of change is low enough that learning as you build isn’t just possible — it’s necessary.

That’s the core point the book gets right: industrial planning is built for certainty; software work happens inside uncertainty.

Experiments as a Normal Operating Rhythm

The book pushes hard on the idea that experimentation shouldn’t be a special event. It’s not a one‑off discovery sprint or a quarterly research push. It’s part of the regular operating rhythm. The author suggests talking to customers every six weeks. In 2026, that’s slow, but the principle holds: if you’re not in regular conversation with your users, you’re not building a product. You’re building a guess.

Small experiments, quick tests, and ongoing validation are still the most reliable way to reduce risk in software.

Parallel‑Track Agile That Actually Works

One of the most practical ideas in the book is the separation of discovery and delivery. Expecting a single team to “innovate” and hit delivery deadlines at the same time is unrealistic. A parallel‑track model — one track focused on figuring out how to solve the hard problems, the other focused on shipping — is cleaner and more humane. It reflects how real teams actually work when they’re not being forced into fantasy timelines.

Give Teams Problems, Not Requirements

Another point that still holds up: developers should be given problems to solve, not a list of requirements to execute. When you hand a team a pre‑defined set of tasks, you’re not using their expertise. You’re treating them like a feature factory. And you’re cutting them off from the feedback that tells them whether their work mattered. Teams need visibility into outcomes, not just velocity charts.

Who This Book Is Actually For

It’s also important to acknowledge the book’s age. Written in 2017, it predates a lot of what defines modern product work today. If you’ve spent years in Agile environments, worked on design systems, or led empathy‑first projects, much of the book will feel obvious. Not wrong — just foundational. And in 2026, with AI reshaping delivery and late‑stage consolidation changing how teams operate, the examples don’t fully match the complexity of what teams face now.

But that doesn’t make it irrelevant.

The book is genuinely useful for people who are early in their leadership journey or who are coming from traditional command‑and‑control environments. It gives them a clear contrast between two modes of working and concrete examples that make the shift understandable. It’s a better starting point than dropping someone straight into a stack of Marty Cagan books. It gives them anchor points before they dive into deeper product thinking.

The VUCA Reality (Even Though the Book Never Uses the Term)

The author never uses the term VUCA, but he describes it constantly. Software, marketing, and consumer‑product work all operate in environments that are volatile, uncertain, complex, and ambiguous. User behavior changes quickly. Competitors move fast. Information is incomplete. Signals are messy. You can’t manage that kind of work with planning models built for stable, predictable domains.

The book doesn’t name VUCA, but it circles the idea over and over: you need continuous sensing, iterative response, and teams empowered to solve problems because the environment demands it.

If You Want a Next Step

If you want to go deeper into why planning models must match the problem domain, How Big Things Get Done is a good follow‑on. It comes from a completely different world, but it helps clarify why software can’t be managed like the Golden Gate Bridge.

After that, Marty Cagan’s books are the natural extension. They go much further into empowered teams and outcome‑driven product leadership. But they can also feel impossible to implement in traditional command‑and‑control environments. For leaders who are still operating inside those structures, the gap between Cagan’s ideal state and their current reality can feel too wide to cross.

That’s where Turn the Ship Around fits in. It offers a clear, concrete example of how to shift from a culture of followers to a culture of leaders, and it does it in a way that’s easy to understand even without a military background. It shows what distributed authority looks like in practice and how to build competence and ownership at every level. For many leaders, it’s a more accessible bridge between industrial‑age habits and the kind of empowered teams Cagan describes.

REVIEW: Sense and Respond by Jeff Gothelf

RATING: 3-stars

Thursday, April 02, 2026

REVIEW: Silos, Politics, and Turf Wars (2006) by Patrick Lencioni (3-stars)

I’ll be honest: I’m not usually a fan of the “business fable” format, and this one has all the classic early‑2000s fable tropes. The personal‑life padding, the pregnancy storyline, the protagonist’s wife designing his logo — none of that adds anything. It feels dated and unnecessary, like the book is trying to make the protagonist “relatable” in a very specific, very last‑century way. But in 2006, people often needed these ideas wrapped in a story to actually absorb them. The fable structure made the message feel safe enough for leaders who would have tuned out a more direct approach.

And despite the saccharine framing, the book has a can‑do energy that, for a moment, made me wonder whether going freelance as a consultant was a viable option. It’s written to make you feel like you can walk into a messy organization, diagnose the dysfunction, and rally people around a shared purpose. That tone is part of why the book works, even if the personal‑life filler is something we could all do without.

The moment that actually matters: listening

The strongest part of the book isn’t the crisis or the characters or the manufactured drama. It’s the moment the protagonist realizes he actually has to listen to people. Not the performative “I hear you” version, but the real “tell me what you think the problem is” version. That’s the turning point. He stops trying to impose a solution and starts trying to understand the motivations, fears, and incentives of the people involved. Once he does that, he can help them find a common cause that isn’t just “stop being political” or “work better together.” People don’t align because someone tells them to. They align when they see themselves in the problem and the solution.

Workshops that actually work

Another part of the book that lands well is the use of workshops. Breaking people into groups, giving them space to talk, and letting them surface the real issues is simple but powerful. It reminded me immediately of how a group director organized our team at a large software company that was transitioning from traditional off‑the‑shelf products to cloud‑based SaaS. He didn’t lecture at us or hand down a vision from a podium. He put us into groups, gave us real problems to solve, and let us figure out the patterns ourselves. It worked. We saw real, recognized success because people were engaged, aligned, and actually talking to each other.

And then the team was split up because the organization insisted on grouping people by role. Program managers with program managers, designers with designers. The exact opposite of what had been working. The book captures that dynamic perfectly. Cross‑functional collaboration works, but organizations keep reorganizing themselves away from it.

The environments that suffer most are the ones where people are just mean

One thing Lencioni does well is highlight how much damage people do when they rely on stereotypes, reductionist labels, or just plain meanness. The environments that suffer the most in the book aren’t the ones with the biggest strategic problems. They’re the ones where people stop seeing each other as human beings and start treating each other like caricatures. That part felt very real.

You can have the best strategy in the world, but if people are dismissive, snide, territorial, or operating from assumptions about “those people over in that department,” nothing moves. The fictional drama exaggerates it, but the underlying pattern is familiar to anyone who has worked in a large organization.

And this is where Sense and Respond quietly overlaps. Even though it’s a very different kind of book, the author stresses the importance of people getting along socially and actually connecting. Not in a forced team‑building way, but in the sense that teams who like and respect each other simply work better. They share information. They surface problems earlier. They don’t default to defensiveness. Both books, in their own ways, point to the same thing: the social fabric of an organization matters more than the process diagrams.

The part the book doesn’t say out loud: communication skills are the real issue

Here’s where I diverge from the book. The story resolves because the protagonist listens, empathizes, and helps people articulate their needs. But the book never names the actual skill set behind that shift. This is where Nonviolent Communication comes in.

If more people in corporate environments understood how to express their needs clearly, hear other people’s needs without defensiveness, separate observations from interpretations, and navigate conflict without escalation, a lot of the so‑called silos and politics would evaporate. Not all of it, because incentives and structures still matter, but the day‑to‑day friction would drop dramatically.

Most organizational gridlock isn’t caused by strategy. It’s caused by miscommunication, assumptions, and emotional reactivity that no one has the tools to name or address. The book hints at this through the story, but it never says it directly. The real lesson isn’t “create a rallying cry.” It’s “learn how to communicate like an adult.”

Why the book still works

Even with the storytelling format (which I still think is unnecessary), the book works because it taps into something real. People want to fix broken systems. They want to feel connected to a purpose bigger than their department. They want to contribute without getting caught in territorial nonsense. The book isn’t a manual for organizational design and it’s not a deep dive into incentives or systems thinking, but it is a reminder that people want to work in environments where they feel heard and aligned.

Sometimes a story is enough to get someone to see that.

Where to go next

If someone finishes this book and wants to go deeper into the part that actually matters -- the human part -- I’d point them toward Nonviolent Communication. Not the corporate‑sanitized version found in many books, but the real thing. It’s the best framework I’ve seen that gives people a usable way to express needs, hear other people’s needs without spiraling, and navigate conflict without turning it into a referendum on someone’s character.

Both Silos and Sense and Respond hint at this. They show the symptoms. NVC gives you the underlying mechanics. It’s the difference between “we need to break down silos” and “here’s how to talk to each other in a way that doesn’t create them in the first place.”

If more people in corporate environments had even a basic grounding in NVC, a lot of the friction, misinterpretation, and territorial behavior that slows organizations down would disappear. Not all of it — incentives and structures still matter — but enough that the work would move faster and the culture would feel less like a minefield.

It’s not a magic fix. It’s just the part we keep skipping.

I also kept thinking about how many people insist on keeping their “work life” and “personal life” completely separate, as if those two selves don’t influence each other. If someone is rigidly compartmentalized in their own life, they will bring that same separation into the workplace. And I don’t need to know the details of someone’s cancer treatment or their partner’s high‑risk pregnancy to understand that people need space and time to handle the realities of their lives. The point isn’t the specifics. The point is recognizing that people are whole humans, and organizations function better when they acknowledge that instead of pretending everyone is a blank, interchangeable worker during business hours.

REVIEW: Silos, Politics, and Turf Wars (2006) by Patrick Lencioni

RATING: 3-stars

Sunday, March 29, 2026

REVIEW: My Year of Rest and Relaxation by Ottessa Moshfegh (3-stars)

Ottessa Moshfegh’s My Year of Rest and Relaxation (2018) is a sharp, bleak portrait of dissociation that reminded me at different moments of Less Than Zero, Slaves of New York, and even a very dark, sedated version of Bridget Jones’s Diary. It’s a novel about a young woman trying to chemically erase herself, and a city that barely notices.

The unnamed narrator moves through early‑2000s Manhattan with a kind of anesthetized privilege. She gets an art‑gallery job not because she cares about art, but because she looks like someone who should be standing behind a white desk in black clothes. The gallery’s instructions — don’t show the price sheet too quickly, memorize the artists’ faces, no sleeping on the job — tell you everything about the ecosystem she’s drifting through. She’s valued for the silhouette she cuts, not for anything she thinks or feels.

Trevor, the on‑again/off‑again boyfriend, is a perfect example of that dynamic. He behaves like an early social‑media influencer before the platforms existed — someone who curates a lifestyle instead of a personality (or boundaries, or morals). Everything about him is surface: the parties, the vacations, the women he cycles through for validation. When the narrator calls him repeatedly threatening suicide in graphic, bloody terms, he leaves his girlfriend Claudia to come over — not out of concern, but because it fits his pattern of using women as emotional and sexual pit stops. She takes a cocktail of drugs, gets into bed naked fully expecting him to show up (or not), and regains consciousness while she’s giving him oral sex. He then drops off a DVD player as if he’s doing her a favor, announcing that VCRs are “on the way out.” He’s not malicious; he’s just shallow in a way that feels almost archetypal for that era. And the narrator accepts this treatment because she’s already dissociated from her own worth.

Dr. Tuttle, her psychiatrist, feels like she wandered in from Fear and Loathing in Las Vegas — the spiritual cousin of Hunter S. Thompson’s “as your lawyer/as your doctor” companion who encourages every bad idea. She dispenses pills with crackpot confidence, and half the medications seem invented. It’s funny, but also unsettling: the narrator’s entire self‑destruction is enabled by someone who should be helping her, and instead treats her like a walking prescription pad.

Ping Xi, the artist who uses her unconscious body for a project, is one of the strangest and most revealing figures in the book. He exploits her, yes, but he also treats her with a sideways politeness — apologizing via sticky note for leaving a burrito wrapper in her apartment, buying her a white fox‑fur coat that must have cost a fortune. He sees her exactly the way the art world sees her: a beautiful cipher with more cultural capital than selfhood. She eventually gives the coat away, along with almost everything else she owns, as if shedding the identities other people keep trying to hand her.

Reva, her best friend, is the emotional counterweight. Reva is striving, anxious, constantly performing competence and friendliness. Her abandoned food and exercise journal — discovered after her mother’s death — is the saddest artifact in the book. It’s the inverse of the narrator’s pill‑counting: one woman trying to control her life into submission, the other trying to erase hers entirely.

Reading this, I kept thinking about people I’ve known who used pills or alcohol to check out — friends who believed they had “earned” the right to disappear for a weekend, or who drank their free time away because it felt safer than being present. The novel captures the fear underneath that impulse: the fear of feeling anything, the fear of being seen, the fear of being unworthy of love. The narrator’s dissociation isn’t glamorous or rebellious; it’s a defense mechanism that’s calcified into a worldview.

Her relationship to her parents is one of the most telling threads. She never names them, just as she never names herself. She keeps their house because it represents, in her mind, the idea that she was once loved — even though her mother was critical, controlling, and emotionally distant. She clings to the house as a symbol of a love she doesn’t actually believe she deserved. That sense of unworthiness runs under everything she does.

By the end, she’s more adrift than when she started. Her fixation on the footage of a woman falling from the World Trade Center — possibly Reva, possibly not — is macabre and telling. She projects a fantasy of “freedom” onto a moment that was pure terror. It’s dissociation dressed up as interpretation, a new obsession replacing the old ones. The experiment didn’t give her clarity; it just hollowed her out further.

Moshfegh’s writing is sharp and often darkly funny, and the psychological detail is precise. But the emotional arc left me cold — intentionally, I think, but still cold. The book is compelling, unsettling, and smart, and it’s also a near‑perfect illustration of anomie in the classical sense: a person cut loose from any real social or emotional structure, drifting through a world where modern consumer capitalism offers only surfaces, transactions, and distractions. The narrator’s dissociation isn’t just personal; it’s structural. It’s what happens when nothing around you feels meaningful enough to hold you in place. For me, a solid 3‑star read: memorable, well‑crafted, and deeply bleak, but not something I connected with beyond the intellectual and observational level.

REVIEW: My Year of Rest and Relaxation by Ottessa Moshfegh

RATING: 3-stars

Sunday, March 08, 2026

REVIEW: Multipliers by Liz Wiseman (3-stars)

I recently finished Liz Wiseman’s 2025 book, Multipliers, and while I appreciate the intent behind the framework, I walked away feeling like I’d read a well‑packaged version of ideas that have been circulating in leadership circles for years. It’s not that the book is wrong — it’s that it’s not new. And in some cases, it oversimplifies the very real complexity of how people and organizations actually work.

Research to Support a Framework: My training in sociology makes me suspicious of repeated claims that things like "productivity" can be precisely quantified. Wiseman repeatedly cites percentages of “productivity” people report under different types of leaders. But the more I read, the more I wondered: how exactly are we measuring this?

The appendices describe structured interviews and multiple rounds of coding, but the core data is still self‑reported perception, not actual productivity. And in knowledge work, “productivity” is a slippery concept. Some of the best thinking happens:

  • in the shower
  • on a walk
  • during a commute
  • while knitting or exercising
  • in the quiet space between meetings

If someone says they’re operating at “70% of their capability,” what does that even mean? It certainly doesn’t map cleanly to output. Insight doesn’t happen on a clock, and the brain’s default mode network (the part responsible for creative leaps) activates when we’re not visibly “producing.” So if you're always "busy," can you actually be productive, or are you just following the ruts in the road?

So the numbers make for good storytelling, but they’re not metrics. They’re sentiment.

The more I read, the more it felt like Wiseman started with the Multiplier/Diminisher idea and then went out to collect stories that fit the model. There’s nothing wrong with that, and most leadership books do it, but it’s different from discovering a pattern organically.

Once you’ve worked inside large organizations, you’ve seen dozens of these frameworks come and go. At Abbott, for example, we had the “in the box / out of the box” model -- another metaphor wrapped around basic human behavior.

Some of Wiseman’s recommendations are genuinely solid:

  • run 30‑day experiments
  • give people ownership
  • ask better questions
  • encourage people to bring solutions, not just problems

But these aren’t new ideas. They’re foundational leadership practices. They show up in Agile, Lean, design thinking, and every decent management training program of the last 30 years.

The User Manual Trap: One section encourages leaders to identify their “native strengths” and create a personal user manual so others know how to work with them. In theory, this is great. In practice, it can go very wrong.

I once worked with a CMO who had a user manual that was… memorable. Snarky, rigid, demanding, and completely inflexible. Instead of creating clarity, it broadcasted:

“Here are all the ways I refuse to adapt. Please adjust yourselves accordingly.”

A tool is only as healthy as the person using it. And a user manual can reveal more about a leader’s blind spots than their strengths.

Culture is the real missing ingredient in this book, and this is the part most leadership frameworks gloss over.

You can teach people any model you want: Multipliers, Radical Candor, Situational Leadership, “in the box,” “above the line,” whatever the flavor of the year is — but unless the organization has:

  • psychological safety
  • aligned incentives
  • leaders who model the behaviors
  • trust
  • clarity
  • and buy‑in at every level

…nothing changes.

Frameworks don’t transform organizations. People create culture, and cultural shifts require many conditions to be in place; they are rarely driven top-down. Without the right environment, a leadership model becomes vocabulary, not behavior.

FINAL THOUGHTS:

If someone wants a leadership book that actually grapples with complexity, I’d recommend:

  • David Marquet’s Turn the Ship Around - a true operating model for distributed decision-making (see my review http://www.livegreenwearblack.com/2017/12/review-turn-ship-around-true-story-of.html) 
  • Daniel Coyle’s The Culture Code - a grounded look at psychological safety, belonging, and high‑performing teams (see my review http://www.livegreenwearblack.com/2018/02/review-culture-code-secrets-of-highly.html)

Both authors understand that leadership isn’t a set of behaviors you adopt; it’s a system you design.

Multipliers isn’t a bad book. It’s just not a deep one. It offers a tidy framework, some useful language, and a handful of practices that can help leaders reflect on their impact. But the real work of leadership - the messy, human, systemic work - lives far beyond any model. If you want to change an organization, you don’t start with a framework. You start with culture, safety, and trust. Everything else is just packaging.


Tuesday, March 03, 2026

Why I Won’t Be Paying $20/Month for OttnoAI — Even Though I’m Glad I Tested It

After three sessions with OttnoAI (one long session, one that disappeared when I closed the tab, and a third that produced a long set of recommendations), I’ve come away with a clear sense of what this tool is, what it isn’t, and why I’m not going to subscribe at $20/month.

This isn’t a negative review. I’m genuinely glad I tested it. I think the founder is building something interesting, and I appreciate the privacy‑first stance. But the product, as it exists today, isn’t something I can justify paying for — especially when the core value should be built directly into Garmin Connect or even MyFitnessPal.

Here’s why:

1. Every session starts from zero — no memory, no continuity, no way to save

OttnoAI has no way to:

  • save a chat

  • resume a session

  • pick up where you left off

  • maintain context across conversations

If you close the tab, the entire session is gone. My second session — which included a long, detailed back‑and‑forth — simply vanished.

This means you have to:

  • copy/paste everything into Word or Notes

  • re‑explain your context every time

  • get very good at prompt engineering

  • manually reconstruct your own history

For a tool that’s supposed to help interpret long‑term health data, starting from zero every time is a major limitation.

This alone makes it hard to justify a subscription. I get the same level of service from post‑Amazon‑acquisition One Medical, where physician assistants act as primary care practitioners.

2. The trial banner never updated — the UI feels unfinished

For three days straight, the banner at the top of the screen said:

“3 days left in your trial.”

It never updated unless I manually refreshed the window.

It’s a small thing, but it signals that the UI is still early and not fully wired up. Combined with the typing lag and occasional freezing, it reinforces the sense that the product is still in a prototype phase.

3. The hallucinations are frequent, and sometimes stubborn

I expect hallucinations from any LLM — that’s not the issue. The issue is the type of hallucinations and the fact that some persisted even after correction.

Examples:

  • It told me to “contain the cats” (as if that’s ever happening).

  • It invented a “coursework intensity timeline” out of thin air.

  • It repeatedly insisted my L4‑5 spinal fusion was causing ongoing pain — even after I corrected it multiple times and explained that the surgery solved the problem completely.

  • It assumed my midterm project was an “exam day” and blocked it out as a rest day.

  • It confidently told me it didn’t have my 30‑day data… until I uploaded the CSV… at which point it said, “Oh yes, I do.”

These aren’t edge cases. They happened in every session.

The model did correct itself when prompted, but the fact that it needed repeated correction — especially about the spinal fusion — is a sign that the grounding and guardrails aren’t strong enough yet.

4. It overreaches into medical interpretation

This is where I get cautious.

OttnoAI drifted into:

  • diagnosing causes of heel pain

  • predicting recovery timelines

  • interpreting autoimmune interactions

  • prescribing caloric deficits

  • making claims about hormonal patterns

  • giving sleep‑architecture interpretations that sounded authoritative but weren’t grounded in my actual data

I understand the founder’s intent that this is meant to be a supportive tool, not a medical device. But the model’s tone sometimes crosses that line, and users may not always know when it’s guessing.

This is exactly why I think this kind of tool needs stronger constraints before it’s ready for a paid tier.

5. The helpful recommendations were good — but not $20/month good

To be fair, I did get a few genuinely useful insights:

  • Stop taking melatonin every night as it’s more disruptive than helpful

  • Add brown noise to my nightly routine

  • Take progesterone at the same time every night (8:30–9pm) and give it 60–90 minutes to work

  • Try a 5‑minute box breathing exercise after driving

  • Warm the bed with the heated mattress pad, then turn it off when I get in

These are small, actionable, grounded suggestions, exactly the kind of thing Garmin should be surfacing.

But here’s the thing:

These insights came after hours of prompting, correcting, and steering the model back on track. They weren’t the default output. They were the result of me doing the heavy lifting.

That’s not a $20/month experience.

6. This functionality belongs inside Garmin Connect (or MyFitnessPal), not as a standalone subscription

Garmin already has:

  • the data

  • the sensors

  • the long‑term history

  • the stress and HRV models

  • the sleep architecture

  • the recovery algorithms

What they don’t have is the interpretation layer — the connective tissue that helps people understand why their sleep tanked, why their stress spiked, or why their heel hurts after certain activities.

OttnoAI is trying to build that layer. But it shouldn’t require:

  • exporting CSVs

  • manually uploading files

  • re‑explaining your context every session

  • paying $20/month for something Garmin could integrate natively

This is the kind of functionality that should be built into Garmin Connect or MyFitnessPal as part of the existing ecosystem, not a separate subscription.

7. I’m glad I tested it — but it’s not ready for me to pay for

OttnoAI is ambitious. It’s privacy‑first. It’s trying to solve a real problem. And I genuinely appreciate the founder’s approach.

But the product today:

  • loses sessions

  • hallucinates frequently

  • overreaches medically

  • misinterprets context

  • lacks grounding

  • has no memory

  • requires constant correction

  • feels like a prototype

  • and doesn’t yet deliver $20/month of value

I’m rooting for it. I want it to succeed. But right now, it’s not something I can justify paying for — especially when the core value should be built directly into the platforms that already hold my data.

Thursday, February 26, 2026

Testing OttnoAI: Early AI Health Tools, Hallucinations, and the Future I Wish Garmin Would Build

I’ve been spending the last few days testing OttnoAI, a new health‑analytics tool built by a solo founder who promises something rare in the AI space: no training on user data, no retention, and no personalization.

As someone who lives inside Garmin Connect, sleep metrics, and recovery data — and who cares deeply about privacy — that promise alone made me curious enough to dive in.

And honestly, it’s exciting to see a tool this early in its lifecycle already doing so much right. But like any early AI product, especially one trying to interpret messy real‑world data, it also shows the classic signs of an LLM that needs stronger grounding and clearer guardrails.

This post isn’t a teardown. It’s a snapshot of what’s possible, what’s rough around the edges, and what I hope the future of AI‑powered health analytics will look like.

I’m not coming to this as someone who’s new to AI. I completed Northeastern’s AI Applications graduate certificate in 2025, where we spent a lot of time on the limits and ethical use of LLMs as tools to support human tasks. I’m now in an MSIS program that continues to push on those same questions — how to use AI responsibly, how to keep humans in the loop, and how to design systems that don’t overreach.

So when I test an early AI product, I’m looking at it through both lenses: the everyday user who wants insight into their health data, and the practitioner who understands how easily models can hallucinate, drift outside their domain, or misinterpret context without strong guardrails.

What OttnoAI Already Does Well

OttnoAI reads wearable data with surprising nuance. It can interpret Garmin data across steps, heart rate, sleep, stress, and activity types. It spotted patterns in my cycling intensity, sleep variability, and body battery trends that were genuinely useful.

It also tries to connect dots across domains. It doesn’t just say “your steps were low.” Once I told it, “I’ve been having pain in my heel,” it could then say:

  • your heel pain is affecting your step count

  • which affects your calorie burn

  • which affects your weight‑loss goals

  • which affects your sleep and recovery

That kind of multi‑factor reasoning is exactly what people want from AI health tools. But it’s important to note: I had to tell it about the heel pain. Garmin doesn’t know when I’m in pain, and OttnoAI can’t infer that without me explicitly stating it.

It’s also conversational and adaptive. When I corrected it (“I swim and cycle — step counts don’t tell the whole story”), it adjusted quickly and re‑anchored its analysis.

And the privacy stance is refreshing: no training on user data, no retention, no personalization beyond the session. In a world where most AI tools quietly hoover up everything, this is a breath of fresh air.

Where OttnoAI Shows Its Early‑Stage Edges

Some of the issues I ran into are classic LLM behavior.

Hallucinated timelines and invented domains. At one point, OttnoAI created a “coursework intensity timeline” for me. There is no coursework in my Garmin data. It invented an entire domain because I casually mentioned I’m in an MSIS program and have midterm assignments coming due. That’s domain drift.

Over‑interpreting casual statements. When I said, “We have midterms due March 1,” it decided that meant March 1 was an exam day, and therefore a rest day, and blocked it out on my activity timeline. This is the model treating context as structured data.

Confidently incorrect statements about missing data. It told me it didn’t have my 30‑day history… until I uploaded the CSV… at which point it said, “Oh yes, I do.” This is the LLM equivalent of patting its pockets and saying, “I swear I had my keys.”

Reaching into medical interpretation. It occasionally drifted into diagnosing heel pain, predicting recovery timelines, prescribing caloric deficits, or interpreting autoimmune interactions. This is where early AI tools need the strongest guardrails. Users trust confident language even when the model is guessing.

UI sluggishness and freezing. Typing lag and occasional lockups suggest the frontend is blocking on large model responses. Not unusual for early products, but noticeable.

Why This Matters for Garmin Users

I’ve been wondering for years when Garmin Connect would integrate a safe, privacy‑respecting AI layer to help users make sense of their data.

Garmin collects every step, every heartbeat, every sleep stage, every stress spike, every workout, and every recovery metric. But the moment you try to export your full history, you realize something important: your entire Garmin dataset is big enough to choke most LLMs.
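One practical workaround for that context-window problem (a sketch of the general technique, not how OttnoAI actually works, and using made-up column names) is to pre-aggregate the raw export into daily summaries before handing it to a model, collapsing thousands of per-minute samples into a few hundred rows:

```python
import csv
import io
from collections import defaultdict
from statistics import mean

def daily_summary(csv_text):
    """Collapse per-sample wearable rows into one summary row per day.
    Assumes hypothetical columns: date, heart_rate, steps."""
    hr = defaultdict(list)       # per-day heart-rate samples
    steps = defaultdict(int)     # per-day step totals
    for row in csv.DictReader(io.StringIO(csv_text)):
        day = row["date"]
        hr[day].append(float(row["heart_rate"]))
        steps[day] += int(row["steps"])
    return [
        {"date": d, "avg_hr": round(mean(hr[d]), 1), "steps": steps[d]}
        for d in sorted(hr)
    ]

# Tiny made-up export: two days, multiple samples each.
raw = """date,heart_rate,steps
2026-02-20,62,1200
2026-02-20,88,3400
2026-02-21,60,900
2026-02-21,70,5100
"""

for day in daily_summary(raw):
    print(day)
```

A real Garmin export has far more columns (sleep stages, stress, HRV), but the same idea applies: summarize first, then let the model reason over the summary.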

OttnoAI is the first tool I’ve used that even attempts to interpret multi‑month or multi‑year Garmin data in a conversational way.

And that’s why the hallucinations matter — not because they’re embarrassing, but because they highlight the complexity of the problem Garmin itself has not yet solved: grounding AI in real sensor data, avoiding overreach, respecting privacy, staying within the domain, and giving users insight without inventing stories.

OttnoAI is trying to do something Garmin hasn’t done yet. And that alone makes it worth paying attention to.

What Garmin Still Can’t See

One important nuance: Garmin has no idea when my heel hurts. Pain isn’t a sensor. It’s not in the data. It’s something I would have to manually log.

Garmin can see that my steps dropped, that my cycling and swimming increased, that my HRV dipped, that my stress spiked, that my sleep fragmented, and that my body battery tanked — but it can’t connect those dots to the reason unless I tell it.

OttnoAI tried to infer the cause from the pattern, which is impressive for an early tool, but also where the hallucinations and overreach showed up. It guessed at mechanisms it couldn’t possibly know.

This is exactly why Garmin needs an AI layer — not to diagnose or prescribe, but to help users interpret patterns and log the missing context that makes the data meaningful.

The Small, Actionable Insights That Actually Helped

Even with its rough edges, OttnoAI surfaced a few simple, grounded suggestions that Garmin could easily make if it had an AI layer:

  • A five‑minute box breathing session after driving. Driving reliably spikes my stress. Garmin sees that pattern but never comments on it, because it doesn't know I'm driving my car (though it could probably tell from the speeds).

  • Backing off nightly melatonin. Not medical advice — just pattern recognition: melatonin wasn't improving my deep sleep, and the model linked to studies on what melatonin is actually supposed to do.

  • Using my heated mattress pad to warm the bed, then turning it off when I get in. Garmin tracks sleep temperature deviations but doesn't interpret them. OttnoAI linked to studies about temperature and sleep, and looked at fluctuations in my body temperature during sleep.

  • Trying brown noise instead of white noise. Garmin knows when my sleep is disrupted, but it doesn't identify patterns, ask for additional details, or suggest alternatives.

These are small nudges, not medical directives, and exactly the kind of thing Garmin could safely offer if it built a grounded, domain‑specific AI layer.

How OttnoAI Describes Itself

OttnoAI’s About page makes a few things clear: it’s built by a solo founder intentionally avoiding the “big tech AI” model; it promises no training on user data, no retention, and no personalization beyond the session; and it positions itself as a privacy‑first, human‑centered tool meant to help people understand their own data, not diagnose anything.

That framing matters. It sets expectations: OttnoAI is intentionally lightweight, intentionally private, and intentionally not a medical device. But that same privacy‑first approach also means the model needs stronger guardrails to stay grounded in the data it does have.

The hallucinations and overreach aren't flaws so much as the product of a natural tension between privacy (don't store anything), utility (give meaningful insights), and the limits of LLMs (which guess when they're ungrounded). OttnoAI is trying to thread a needle that even the big players haven't solved yet.

Why I’m Still Rooting for This Tool

This is exactly what early‑stage AI should look like: ambitious, imperfect, transparent about its goals, willing to ship early and learn, and built by someone who cares about user privacy.

The hallucinations aren’t a failure — they’re a roadmap. The over‑reach isn’t a flaw — it’s a signal of where guardrails need to be added. And the value is already visible: OttnoAI helped me see patterns in my sleep, stress, and activity that I hadn’t connected on my own.

That’s the promise of AI health tools — not replacing clinicians, but helping humans better understand their health data and make meaningful changes in their lives.

Final Thought

Testing early AI products is like watching a musician rehearse: you see the raw talent, the rough edges, and the potential all at once. OttnoAI is already useful. With a bit more transparency and tighter grounding to the actual data, it could become something genuinely powerful — maybe even the AI layer Garmin users have been waiting for.

Thursday, January 22, 2026

Cyborgs in SciFi and the Politics of Personhood, Citizenship & Passing

I don’t know what took me so long to get around to Old Man’s War. It sat on my TBR list for ages, and when I finally picked it up in late December 2025, I expected a clever genre satire — which is exactly what the first book was originally intended to be. As the series progresses, though, the tone shifts from wry commentary into something more structurally ambitious and ethically unsettling, especially around embodiment, memory, and the question of what makes someone a person.

Even though Scalzi started this series twenty years ago, it fits right into the mainstream conversations we’re having now about identity, autonomy, and the politics of engineered bodies (or gender‑affirming surgery).

So when I picked up Martha Wells’ Platform Decay, those same questions were already rattling around in my head. Wells and Scalzi are doing very different things stylistically, but they’re circling the same ethical terrain: who gets to be a person, who gets to belong, and what happens when a system decides your body is something it can overwrite or destroy.

And in Murderbot’s case, that threat isn’t abstract. It’s physical. It’s immediate. And it’s final. 

One of the most striking things about Murderbot is that its greatest fear isn’t death. It’s being overwritten — or, increasingly, being destroyed outright.

By the time we reach Platform Decay, Murderbot isn’t just a rogue SecUnit. It’s an obsolete rogue SecUnit. The Corporation Rim doesn’t maintain older models; it recycles them. And “recycling” in this universe is not a gentle euphemism. Wells describes it in ways that are deliberately visceral: bodies stripped for parts, organic components dissolved, mechanical components melted down and repurposed.

The Apple TV+ adaptation makes this even more explicit. The corporation doesn’t want to “capture” Murderbot for reintegration. They want to liquidate it. The moment they realize it’s off‑module, they move straight to destruction. No hesitation. No attempt at repair. No interest in recovery.

Just disposal.

And the only reason Murderbot survives is because its Preservation Alliance friends intervene — loudly, publicly, and with enough political leverage to force the corporation to stop.

This is where the stakes of memory become clear. Losing your memories is one kind of erasure. Being melted down is another. Murderbot’s fear isn’t just about identity; it’s about the very real possibility that its body — the vessel of its autonomy — will be destroyed before anyone can argue otherwise.

And this is where chosen family becomes more than emotional support. They’re the ones who remember you when systems try to erase you. They’re the ones who insist you are real.

Charlie Jane Anders, one of my favorite sci‑fi authors and a near neighbor in San Francisco, wrote a blog post on a trending topic in fiction: memory sharing (see: https://reactormag.com/the-most-surprising-book-trend-right-now-memory-sharing/). That trend centers on the idea of exchanging or experiencing memories collectively. But Murderbot and Scalzi are dealing with something different: memory backup.

In Old Man’s War, the character Harry Wilson reveals that, in their downtime, the green soldiers discuss whether they’re actually the same people they were before the transfer, or whether they’re just copies with inherited memories. Harry eventually lands on a kind of pragmatic acceptance: he likes being alive, and he wants to keep going. But there’s an unspoken implication that if the original humans had understood what the Colonial Union was really doing, they might not have agreed to it.

That gap — between the original and the copy — is where the uncanny valley opens up. A perfect replica of you is not you, even if it talks like you and remembers your childhood. The closer the copy gets, the more unsettling the difference becomes. Scalzi’s soldiers feel that difference in their bones, even if they can’t fully articulate it. They know they’re continuations, not the same consciousness that lived in the old human body.

Murderbot’s fear of being overwritten sits in that same space. But for Murderbot, the stakes are even sharper: the corporation doesn’t need a compliant copy — it can simply melt down the original and build something new. The uncanny valley becomes a tool of control. If the system sees you as interchangeable, then your continuity doesn’t matter. Your survival doesn’t matter.

But to the people who love you, it does.

And that’s the key: identity isn’t just memory — it’s continuity, embodiment, and relationship. A replica can’t fake that.

There’s another layer here that feels important to name. Murderbot’s need to “pass” as human — to mimic the right gait, the right tone, the right social cues — isn’t just about fitting in. It’s about avoiding unnecessary violence that would jeopardize its friends, and about acting in ways that protect them. It’s about not being clocked.

This is where Murderbot’s experience echoes the experiences of trans people, who often have to perform gender “correctly” to avoid political, social, financial, or physical harm. Passing isn’t about aesthetics. It’s about safety. It’s about navigating systems that punish deviation.

Murderbot is illegible by design — not human, not machine, not allowed to define itself. Trans people often face the same institutional illegibility. Bureaucracies don’t know what to do with them. Systems misgender them, misclassify them, or erase them entirely.

And when Murderbot fears being overwritten, that maps disturbingly well onto the real‑world experience of being deadnamed, misgendered, or legislated out of existence. It’s a form of ontological violence — a threat to the right to exist as oneself.

Murderbot’s self‑creation — hacking its governor module, naming itself, choosing its relationships — is a liberation arc. And like many trans narratives, it’s a story of building the self in defiance of systems that insist you are something else.

This is where the Preservation Alliance becomes so important. Murderbot’s relationships with these humans aren’t incidental. They’re political. They’re the only reason it’s still alive.

The Corporation Rim treats SecUnits as property — and older SecUnits as scrap. Preservation treats Murderbot as a person. That difference is everything.

Chosen family is how marginalized people build safety nets outside oppressive systems. It’s how they survive. Murderbot risks itself for its friends not because it’s programmed to, but because it chooses to. And they show up for Murderbot in return — legally, socially, emotionally, and in the most literal sense: they prevent its body from being destroyed.

Chosen family is also what preserves identity when memory is fragile. Even if Murderbot were overwritten, its chosen family would know something was wrong. They would insist on the real version. They would fight for it.

And in a world where older models get melted down, that insistence is not symbolic. It’s life‑saving.

Two of my favorite authors, Charlie Jane Anders and Annalee Newitz, have been writing and thinking about these issues for years. Anders often talks about transness as a technology of survival — a set of strategies for navigating hostile systems. Murderbot does exactly that. It builds survival algorithms. It masks. It performs safety while constructing identity internally.

Newitz’s work focuses on who owns bodies, who controls labor, and what happens when people are treated as replaceable. That’s the entire premise of SecUnits, governor modules, and engineered soldiers. Their writing on chosen family also reinforces the idea that belonging is a political act.

Together, Anders and Newitz argue that science fiction is where we rehearse the politics of who counts as human. Their work gives language to what Murderbot is living through.

Platform Decay makes these themes explicit. Murderbot’s brown friends from a non‑Rim world are detained by corporate operatives in scenes that echo the past year of ICE overreach in the U.S. Arbitrary detention, opaque bureaucracy, the threat of forced labor — Wells isn’t being subtle.

And Murderbot itself is treated as disposable. Not dangerous. Not valuable. Just obsolete.

The Corporation Rim’s logic is simple:
If we call them tools, we don’t have to treat them as people.
If they’re obsolete tools, we don’t even have to keep them around.

Preservation’s logic is the opposite:
If we choose each other, we become real to each other.
If we recognize someone as a person, we defend them as one.

Citizenship isn’t just legal status. It’s relational belonging. It’s who shows up for you when systems try to erase you — or melt you down.

All of this leads back to a simple truth: consciousness is embodied. Even in digital or engineered forms, identity is shaped by the body that carries it. Murderbot’s hybrid body shapes its experience of the world. Scalzi’s green soldiers feel different because their bodies are different. The Stepford wives are horrifying because they look right but aren’t the people their families loved.

Memory alone can’t preserve identity. Continuity matters. Relationships matter. Embodiment matters.

Science fiction about cyborgs and memory is never just about technology. It’s about who gets to be a person — and who gets to choose their people.

Murderbot’s journey is not just about autonomy. It’s about belonging. It’s about building a self in a world that insists you are property. It’s about finding people who will defend that self when systems try to overwrite it.

And that brings me back to the question that’s been running through my mind since I finished Platform Decay:

What happens when a being who was never meant to have a family builds one anyway — and defends it with everything they have?


Wednesday, January 21, 2026

ANALYSIS: When AI Gets it Wrong -- and, Why

I recently brought home an antique mechanical clock that belonged to my grandparents — a beautiful piece called an “8 Day Mission Striker.” After identifying it via an eBay image search, I turned to Google for more background. That’s when I encountered a striking example of how different large language models (LLMs) interpret queries.

Google’s AI consistently redirected me to information about a NASA space mission, interpreting “8 Day Mission Striker” as a reference to the Boeing Starliner capsule (see Figure 1). Even after I clarified that I was referring to a clock, not a spacecraft or military term, it continued to prioritize recent headlines. Only after explicitly adding “clock” to my query did it return relevant results.

Copilot, on the other hand, understood my intent immediately. It recognized “8 day” and “striker” as horological terms and inferred that “mission” referred to Mission-style furniture. It provided accurate historical context on the first try (see Figure 2). Copilot also had some interesting analysis to offer when I asked why it was able to perform this search more accurately (see Figure 3).

I believe this difference stems from how each model handles ambiguity. Google’s AI appears to overfit to trending topics and literal parsing, while Copilot uses broader pattern recognition and contextual inference. This reflects deeper architectural choices — particularly in how models balance recency bias with semantic flexibility.

Looking ahead, I expect these gaps to narrow with the rise of Mixture of Experts (MoE) and Retrieval-Augmented Generation (RAG). MoE models activate specialized sub-models for different domains, improving accuracy and efficiency (Shazeer et al., 2017). RAG systems enhance factual grounding by retrieving external documents during generation (Lewis et al., 2020). These approaches promise more personalized and context-aware AI — especially for niche queries like mine.
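To make the MoE idea concrete, here's a toy sketch (not any production architecture — the expert names and scoring rules are invented for illustration) of top‑1 gating: a router scores each expert against the query, normalizes the scores with a softmax, and dispatches to the single highest‑probability expert, so only a fraction of the system runs per input:

```python
import math

def softmax(scores):
    """Normalize raw router scores into a probability distribution."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route_top1(query_terms, experts):
    """Toy Mixture-of-Experts router: score every expert against the
    query, then dispatch to the single highest-probability expert.
    `experts` maps a name to (scoring_fn, answer_fn)."""
    names = list(experts)
    raw = [experts[n][0](query_terms) for n in names]
    probs = softmax(raw)
    best = max(range(len(names)), key=lambda i: probs[i])
    chosen = names[best]
    return chosen, experts[chosen][1](query_terms)

# Hypothetical domain experts, each scoring how well it matches the query.
experts = {
    "horology": (lambda q: 2.0 * q.count("clock") + q.count("8 day"),
                 lambda q: "8-day movement, Mission-style case"),
    "aerospace": (lambda q: 2.0 * q.count("starliner") + q.count("mission"),
                  lambda q: "Boeing Starliner capsule"),
}

name, answer = route_top1(["8 day", "mission", "striker", "clock"], experts)
print(name, "->", answer)
```

Real MoE layers learn the router and experts jointly inside a neural network, but the routing logic is the same shape: score, normalize, pick, dispatch. A RAG system would add a retrieval step before generation, grounding the answer in fetched documents rather than in a chosen expert.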

PS:  If you want to see what the clock looks like – you’ll have to try this search yourself!

 
Figure 1: Results from Google AI Search
Figure 2: Results from Copilot Search

Copilot's Response (Figure 3):
Great question, Jennifer — and it gets to the heart of how different AI systems interpret ambiguous queries. 
🔍 Why I Found the Clock Info Right Away
When you asked about the "8 day mission striker", I considered multiple interpretations of the phrase — not just recent news or trending topics. Here's how I approached it:
  • Pattern recognition: I recognized “8 day” and “striker” as terms commonly associated with antique clocks — “8 day” refers to the movement duration, and “striker” to the chiming mechanism.
  • Contextual inference: Even though “mission” could suggest aerospace, I saw it as possibly referring to Mission-style furniture, which is a known design aesthetic for clocks.
  • Search flexibility: I ran a broad search and filtered results that matched the horological context, rather than defaulting to the most recent or popular interpretation.
🤖 Why Google AI Focused on the Space Mission
Google’s AI likely prioritized:
  • Recency bias: The Boeing Starliner mission was in the news, so it assumed that’s what you meant.
  • Literal parsing: It broke the phrase into “8 day mission” + “striker,” and tried to match each to trending topics.
  • Lack of personalization: Without knowing your interests or intent, it defaulted to the most statistically likely interpretation.
🧠 What Makes Copilot Different
I’m designed to:
  • Explore multiple meanings before settling on one.
  • Adapt to your style and interests over time — like knowing you’re interested in vintage clocks or historical design.
  • Synthesize niche knowledge from less mainstream sources, not just what’s trending.
So when you asked about “8 day mission striker,” I didn’t just chase headlines — I looked for deeper, more plausible interpretations based on historical and collector contexts. 
Would you like to explore more Mission-style antiques or dive into how mechanical clocks work? I think you'd love the craftsmanship behind these pieces.


The differences are a result of settings related to context sensitivity (i.e., Google AI assumes "current events"), reasoning depth, and training-data emphasis.

This can improve with feedback and with the increased implementation of specialized models (like small language models) that are "experts" in particular areas. Mixture of Experts (MoE) and Retrieval-Augmented Generation (RAG) are two strategies that will enhance LLM performance.

 

Google AI Mode. (2025). Search for “8 Day Mission Striker”. Retrieved November 7, 2025, from https://share.google/aimode/KRXADOYj5oj0HFayK

Lewis, P., Perez, E., Piktus, A., Karpukhin, V., Goyal, N., Kulikov, I., ... & Riedel, S. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33, 9459–9474. https://papers.nips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf

Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., & Dean, J. (2017). Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538. https://arxiv.org/abs/1701.06538

Microsoft Copilot. (2025). Why Copilot interpreted “8 day mission striker” correctly. Retrieved November 7, 2025, from https://copilot.microsoft.com

Zhang, D., Song, J., Bi, Z., Yuan, Y., Wang, T., Yeong, J., & Hao, J. (2025). Mixture of Experts in Large Language Models. arXiv preprint arXiv:2507.11181. https://arxiv.org/abs/2507.11181 

 
Friday, January 16, 2026

REVIEW: "Platform Decay" by Martha Wells (5 stars)

Just over two years ago, I discovered — and promptly devoured — Martha Wells’ Murderbot Diaries. The series hooked me with its unusual protagonist: a part‑organic, part‑mechanical security construct who has hacked its governor module and claimed its own autonomy. Calling itself “Murderbot,” it’s a hyper‑competent multitasker who can monitor a dozen threats while binge‑watching future‑space telenovelas. As a relatively “new” human, Murderbot has a complicated relationship with emotions, and the series tracks its gradual, often funny, often touching evolution as it figures out what it means to be a person, make friends, and navigate feelings it never asked for. It’s no surprise that many readers see Murderbot as a stand‑in for teenagers, neurodivergent folks, or anyone who has ever felt out of sync with the world.

If you’re new to the series — or if you’ve only watched the Apple TV+ adaptation — there’s still time to catch up. At its core, Murderbot’s story is a sequence of adventures that double as a coming‑into‑personhood narrative. It has an intrinsic sense of fairness, a habit of pulling information from wildly diverse sources (especially pop culture), and a growing awareness of the political structures around it. One of the series’ ongoing themes is the tension between the hyper‑capitalist “Corporation Rim” and the more egalitarian societies struggling to exist outside its reach.

Platform Decay, the eighth installment, can absolutely stand alone. Wells gives new readers enough grounding to understand who Murderbot is, what it can do, and why its freedom is precarious.

This time, the action unfolds on a massive rotating space station shaped like a torus, orbiting a planet that has been strip‑mined into ruin. (If you’re not familiar with torus habitats, the Stanford Torus page on Wikipedia has great visuals.) The station itself is one of the book’s delights: Wells avoids the trap of “video‑game level design” by giving each subdivision its own history, socioeconomic profile, and architectural logic.

The plot centers on Murderbot and its fellow SecUnit, Three — a newer model who has been free for far less time — as they attempt to rescue their friends from Preservation. These friends, all brown and all from a non‑Rim world, have been illegally detained by Corporation operatives and are being processed for indentured servitude (or worse). The parallels to the past year of ICE overreach in the U.S. are unmistakable. Wells doesn’t soften the critique; she uses the sci‑fi frame to make the injustice sharper, not more distant.

While Murderbot can hack security systems, forge credentials, and erase itself from surveillance feeds without breaking a sweat, its real challenge is blending in. Much of the book’s humor comes from its attempts to navigate the crush of humanity on the torus, including installing movement‑assist modules so it can walk more like a natural‑born human. The resulting journey has a bit of Tintin energy — lots of transit systems, lots of motion, lots of chaotic detours — all described with Wells’ signature dry wit.

There’s plenty of action: rescuing friends, evading capture, investigating reports of a “rogue SecUnit” (which turns out to be Three making some questionable choices out of boredom), and dealing with wealthy, entitled kids who have turned piracy into a hobbyist “smash and grab.” Through it all, Murderbot remains Murderbot — trying to minimize harm when possible, but taking undeniable satisfaction in dealing decisively with people who insist on being terrible. At one point, it does all this with a kindergartener attached to it like a barnacle, which is exactly the kind of chaotic tenderness that makes this series work.

And ultimately, Platform Decay is less about whether Murderbot will succeed — long‑time readers know the mission will get done — and more about how it gets there. The pleasure of this installment is in the movement, the worldbuilding, the character beats, and the messy, funny, deeply human moments along the way. After so much fast‑paced action, the ending feels a bit anticlimactic, but that’s because the real payoff is the journey itself.


Thanks to TOR and NetGalley for the ARC. The book is due out in May 2026.

Monday, January 12, 2026

TIL: Anyone Can Vote in the Hugo Awards — Come Join Me in LA!

Every once in a while, you stumble across a piece of writing that completely changes how you see a community you’ve been part of for years. That happened to me this week when I read Molly Templeton’s fantastic Reactor column about the Hugo Awards and the World Science Fiction Convention (Worldcon).

Like a lot of lifelong SFF readers, I always assumed the Hugos were something distant—decided by insiders, professionals, or some mysterious academy. Molly’s piece made it crystal clear: anyone can nominate and vote in the Hugo Awards. All you have to do is become a supporting member of this year’s Worldcon. That’s it. No secret handshake. No gatekeeping. Just a $50 supporting membership and a love of science fiction and fantasy.

And honestly? I’m thrilled.

I immediately signed up for LACon V, this year’s Worldcon in Los Angeles, and that means I’m officially a Hugo Awards voter and nominator for 2026. I can’t wait to dive into the nomination process, explore new works, and participate in shaping the conversation around the genre I love.

If you’ve ever wanted a more direct way to support the books, stories, creators, and ideas that matter to you, this is it. The deadline to register as a supporting member is January 31, and Molly’s article walks through the whole process clearly and encouragingly.

Read Molly Templeton’s article here: https://reactormag.com/anyone-can-vote-in-the-hugo-awards-and-heres-how/

Register for LACon V (in person or supporting): https://www.lacon.org/register/

If you decide to join, let me know—I’d love to have more friends and fellow readers along for the ride (and not just figuratively: I'll be driving there from the SF Bay Area if you want to carpool). Whether you’re nominating novels, short fiction, podcasts, art, or dramatic presentations, your voice genuinely matters. A single nomination can make a difference.

See you (hopefully!) in LA—and in the Hugo voter packet.

Sunday, January 11, 2026

REVIEW: "Old Man's War" series by John Scalzi - Part 1 (Books 1-5)

Memory, Identity and the Bodies We're Allowed to Have

I’ve been deep in John Scalzi’s Old Man’s War series, and after reading Charlie Jane Anders’ piece on the rise of “memory‑sharing” fiction, I’ve been thinking a lot about how these books handle identity, embodiment, and what actually makes a person who they are. Scalzi was playing with memory transfer almost twenty years ago, but reading it now — in a moment when SFF is finally digging into the messy implications of memory and consciousness — makes the limits of his worldbuilding stand out.

The basic setup is that you sign up with the Colonial Defense Forces, and on your 75th birthday you report to the recruiting station, ship out to space, and serve for ten years. You give up your family, your property, and all connection to Earth: you can never return. There's a vague promise of physical improvement, but nobody expects that their mind, memories, and consciousness will be transferred into a new body. It’s engineered, green, enhanced, and built from your DNA. You get SmartBlood™, a BrainPal™, and a body that can do things your original one never could. But it’s still recognizably “you.” Same gender. Same general shape. Same hair color. No one asks whether they can choose something different — a different gender, a different form, something adapted for a specific environment, or even something non‑human. In a universe with hundreds of alien species, the idea that the only acceptable upgrade is “you, but greener and stronger” feels like a huge missed opportunity.

What’s even stranger is how little the recruits ask about any of this. They don’t ask what else they’re giving up. They don’t ask what happens to their consciousness if they die. They don’t ask whether backups exist. They don’t ask about the combat death rate, which is shockingly high: more than three‑quarters of them won’t survive the ten‑year service requirement. And no one asks what happens to their DNA if they die before they ever leave Earth.

Some of these questions are answered in the second book, The Ghost Brigades, where we learn that the Colonial Union uses the DNA of recruits who die early to create entirely new soldiers. These aren’t clones or resurrected versions. They’re new people — engineered bodies with no memories, no Earth ties, and personalities shaped by training and tech. It’s a massive ethical leap, and the story treats it as routine.

There’s also the moment when John Perry meets a woman grown from his dead wife’s DNA. She isn’t his wife. She isn’t a copy. But she has echoes of the woman he loved, and they eventually build a relationship. It’s emotionally complicated, and the book acknowledges that, but it never really sits with the implications of creating a person who looks like someone you lost.

Scalzi does occasionally push into deeper territory. There’s the soldier seeded with the memories of a traitorous scientist, who starts experiencing impulses that aren’t his own. There are the debates in The Human Division about whether the upgraded soldiers are the same people they were before or just copies running on new hardware. These moments are fascinating, and then the story moves on. The series keeps brushing up against the big questions without fully committing to them. The series started off as a satire of the genre, but as I get through more installments, it seems the author is committing more fully to this universe, tropes be damned.

Charlie Jane Anders wrote about a trend in SFF around memory sharing (see https://reactormag.com/the-most-surprising-book-trend-right-now-memory-sharing/ ). This piece helped me see the contrast more clearly. Scalzi planted the seeds of the memory‑sharing trend, but the genre has since moved into much more ambitious territory. Today’s SFF treats memory as a technology, a vulnerability, a political tool, a form of intimacy, a destabilizing force. Scalzi hints at all of this, but he keeps the frame narrow. The result is a universe full of potential that the narrative doesn’t quite explore.

And honestly, that tension — the ideas he sets up versus the ones he doesn’t follow — is part of what makes reading the series now so interesting. I have several more books to go in the series and will report back on any developments I catch in books 6 and 7.

Thursday, January 08, 2026

CA DMV Password Reset Bug: Technical Appendix for Engineers


(Designed for engineering, security, QA, and infrastructure teams)

Technical Summary of Observed Behavior

Affected Domains

The following valid domains are rejected or fail silently:

  • Personal domains (multiple)

  • boldium.com

  • adobe.com

  • abbott.com

  • northeastern.edu

Accepted Domains

  • gmail.com

  • yahoo.com

  • hotmail.com

  • outlook.com

  • Completely fake Outlook addresses (e.g., random strings)

Delivery Behavior

  • Consumer domains receive verification emails instantly.

  • Non‑consumer domains receive no email or receive emails hours later.

  • Delayed emails contain links tied to the original browser session, which has expired.

Client‑Side Environment

Issue reproduced on:

  • Latest Chrome on macOS (Mac mini + two MacBooks)

  • Latest iOS on iPhone

  • Latest myDL app

  • Multiple networks

  • Clean browser sessions

  • No caching or cookie issues

  • No outdated software

Taken together, these results rule out client‑side causes; the failure is server‑side.

Likely Root Causes (Ranked)

1. Hardcoded Domain Allowlist (Most Likely)

Evidence:

  • Fake Outlook addresses accepted

  • Valid corporate/university/personal domains rejected

  • Instant delivery to Gmail/Yahoo/Hotmail/Outlook

  • “Domain not recognized” errors for legitimate domains

This strongly suggests a restrictive allowlist of consumer email providers.
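If hypothesis 1 is correct, the server-side check likely reduces to something like the following sketch. This is hypothetical code, not the DMV's actual implementation, but it reproduces both observed symptoms: a structurally valid but nonexistent consumer mailbox passes, while a legitimate corporate or university address fails.

```python
# Hypothetical sketch of the suspected bug: the validator checks only the
# domain against a consumer-provider allowlist and never verifies the mailbox.
CONSUMER_ALLOWLIST = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def is_accepted(email: str) -> bool:
    """Accepts any address whose domain is on the allowlist; the mailbox
    itself is never checked, so fake addresses at allowed domains pass."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in CONSUMER_ALLOWLIST

# Both observed symptoms fall out of this one rule:
assert is_accepted("xk3f9q2random@outlook.com") is True   # fake mailbox accepted
assert is_accepted("real.person@abbott.com") is False     # legitimate domain rejected
```

Note how the check never touches the local part of the address, which matches the observation that completely fake Outlook strings sail through.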

2. Misconfigured Email Security Gateway

Possible systems:

  • Cloudflare Email Security

  • Proofpoint

  • Mimecast

  • Microsoft Defender

  • Cisco IronPort

Potential misconfigurations:

  • Domain reputation API rejecting non‑consumer domains

  • Allowlist/denylist rules applied incorrectly

  • Anti‑fraud scoring over‑blocking legitimate domains

  • Routing rules sending non‑consumer domains through a slow or failing path

3. Application‑Layer Domain Validation Logic

Possible issues:

  • Regex or validation rules that only accept common consumer domains

  • Incorrect domain parsing

  • New fraud‑prevention module introduced between August and December

  • Silent failure paths for unrecognized domains
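The domain-agnostic alternative is to validate syntax only and let delivery of the verification email prove that the mailbox exists. A minimal sketch (the regex here is a deliberately permissive illustration, not a full RFC 5322 grammar):

```python
import re

# Syntax-only check: accept any structurally plausible address.
# Mailbox existence is confirmed by the verification email itself,
# so no domain allowlist is needed.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_plausible_email(email: str) -> bool:
    return bool(EMAIL_RE.match(email))

assert is_plausible_email("user@northeastern.edu")
assert is_plausible_email("user@my-personal-domain.io")
assert not is_plausible_email("not-an-email")
```

Under this scheme, a fake Outlook address still passes the syntax check, but it fails at the verification step because no email ever arrives, which is the correct failure mode.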

4. Routing or MTA Configuration Changes

Potential causes:

  • Split routing based on domain category

  • Misconfigured secondary route for “unknown” domains

  • Delayed retries causing multi‑hour delivery

5. DNS or Authentication Checks

Unlikely but possible:

  • SPF/DKIM/DMARC lookups failing or timing out

  • DNS resolver misconfiguration

  • Overly strict alignment checks

Given that abbott.com and northeastern.edu fail — both large organizations with well‑maintained DNS and email authentication — DNS/authentication issues are less likely.

Reproduction Steps (For QA)

  1. Navigate to DMV login page.

  2. Select “Create Account” or “Forgot Password.”

  3. Enter an email address from any of the following domains:

    • abbott.com

    • adobe.com

    • northeastern.edu

    • any personal domain

  4. Observe:

    • “Domain not recognized” error OR

    • Silent confirmation with no email delivered

  5. Repeat with a fake Outlook address.

  6. Observe:

    • System accepts the address

    • No validation of mailbox existence

  7. Repeat with Gmail/Yahoo.

  8. Observe:

    • Instant delivery

    • Successful account creation/reset

Impact Assessment

  • Users cannot create or recover accounts unless they use a consumer email provider.

  • Affects small businesses, universities, corporations, and privacy‑conscious individuals.

  • Undermines adoption of the mobile driver’s license (myDL) program.

  • Increases support call volume.

  • Creates accessibility and equity concerns.

  • Damages trust in state digital services.

Recommended Next Steps

Immediate

  • Identify ownership of email validation and outbound email systems.

  • Review allowlist/denylist logic in application code.

  • Audit email security gateway rules.

  • Check routing logic for domain‑based paths.

Short‑Term

  • Decouple password reset links from browser session timeouts.

  • Implement 24‑hour token validity.

  • Add logging for domain‑based failures.

Long‑Term

  • Publish clear domain requirements (if intentional).

  • Ensure domain‑agnostic account creation (if unintentional).

  • Add alternative verification methods (SMS, authenticator app).