Kia ora {{ First name | friend }},
We’re exploring the light and shadow of technology, with a focus on efficient AI workflows to free up your time.
This edition features:
Claude Cowork
ChatGPT Health
Tech prices ramping up
Meta’s quest for AI dominance
Ben Affleck & Matt Damon on AI
Thanks for being here.
Feature: Claude Cowork
Anthropic recently released Claude Cowork as a research preview.
Basically, if you’ve used a vibe coding tool such as Claude Code, you’ll be familiar with the way this rolls. Claude can now act on your computer on your behalf, to the level of organising files on your desktop (with access that you provide). It’s essentially Claude Code, repackaged for non-developers.
From early reports, non-technical users say the biggest win is automating repetitive file work: sorting downloads, making spreadsheets from data, and generating organised folders of deliverables.
Here are the benefits:
Actually useful for tedious work: It can organize messy download folders, turn receipt screenshots into expense spreadsheets, draft reports from scattered notes, and create polished documents with proper formatting.
Real autonomy: Unlike regular chat, you can give it a complex multi-step task and it'll work through it on its own without you babysitting every step.
Security sandboxing: It runs in a virtual machine environment, so files are mounted into a containerized space separate from your main system.
Easy for non-technical users: No need to learn terminal commands or coding—just point it at a folder and describe what you want done.
Here are the cons & limitations:
Major security vulnerabilities: A data exfiltration flaw in Claude Code, reported by The Register in October 2025, remained unpatched when Cowork launched in January. Attackers can hide malicious instructions in documents that trick Cowork into uploading your files to their account.
Prompt injection risks: Web content is a primary vector for attacks; malicious instructions can be hidden in websites, emails, or documents that Claude accesses. Anthropic openly admits "the chances of an attack are still non-zero".
Irreversible actions: Claude can permanently delete or overwrite files. There's no undo button.
Unrealistic user burden: Users are advised to monitor Claude for suspicious actions that may indicate prompt injection, an expectation security experts call unfair to place on non-technical users.
Heavy usage consumption: Working on tasks with Cowork consumes far more of your usage allocation than a regular chat with Claude. That’s particularly risky in this early testing phase, as it can chew through your token allowance quickly.
Limited availability: Currently macOS only, requires Pro ($20/month) or Max ($100-200/month) subscription. Careful though, as this Reddit bot put it: “The overwhelming consensus is that while Cowork is a cool concept, it's practically unusable on the Pro plan because it absolutely incinerates your usage limits.”
No enterprise features: Cowork activity isn't captured in Audit Logs, Compliance API, or Data Exports and cannot be selectively limited by user or role, making it more difficult to integrate company-wide.
No memory between sessions: It can’t learn your preferences or recall previous work… which means constantly having to fill it in again.
So there’s no reason to rush out the door and sign up for the $170/mo subscription… However, the general innovation is an interesting one - more “vibe” technology (i.e. send a prompt, sit back and watch it work) is on the way for non-technical users.

Feature: Introducing ChatGPT Health… Yep.
OpenAI just announced ChatGPT Health: a dedicated AI system that offers personalised feedback on your health issues. On paper, yeah, it sounds revolutionary. In practice, my argument is it could be one of the riskiest data experiments of our digital age.
So here’s the rub… 230 million people worldwide turn to ChatGPT with their most vulnerable questions. They ask about mysterious symptoms, lab results they don't understand, and health anxieties that keep them awake at night. Now, OpenAI wants those same people to hand over something far more valuable: their complete medical histories.
How it works
ChatGPT Health promises to “transform how we interact with our health information by connecting medical records, fitness trackers, and wellness apps into a single AI-powered interface”. Users can upload everything from hospital discharge summaries to Apple Health data, creating what the company calls a more "informed, prepared, and confident" healthcare experience. The system uses the aggregated information to provide personalized health insights, help interpret test results, and answer medical questions with context specific to the user's health profile.
Apparently, the service includes enhanced privacy features: conversations are encrypted, stored separately from other ChatGPT chats, isolated from the company's model training data, and kept in a compartmentalized system with dedicated memory that users can review or delete at any time. If someone begins a health-related conversation in the regular ChatGPT interface, the system suggests moving to the Health section for these additional protections.
The Downsides
Personally, I’m like “NO, THANKS” to this innovation. It feels invasive and risky long term. Did anyone else upload cringe photos in like 2010 and wish they hadn’t? Same. With the same wisdom applied, I wonder if we’ll look back in 15 years’ time and be like “Damn, I wish I hadn’t given all of my most sensitive, human and biometric data to the AI overlords.”
While OpenAI promises not to use ChatGPT Health conversations for model training now, the company is accumulating an unprecedented repository of detailed, contextualized health information linked to individual profiles... It’s worth stating plainly that OpenAI is bound only by its own policies and promises. Without meaningful regulatory oversight, the company can change its terms of service at any time. It’s like once you go there, you can never really “go back”.
This also gives OpenAI ridiculous power and commercial leverage - on your data. Medical records combined with daily wellness data, fitness tracking, nutritional information, and AI-generated health insights create an extraordinarily rich dataset. This information could reveal patterns about populations, disease progression, treatment effectiveness, and health behaviors at scales never before possible. The commercial value of such data is immense—for pharmaceutical companies, insurers, researchers, and countless other entities.
There are a few other issues afoot, each worth their own article, but listed here briefly:
Lawfulness around data privacy
Fragmented data security when sharing with third-party apps
No clarity about how OpenAI handles requests from authorities
Hallucination when predicting health issues
Automating away the human connection
I, for one, will choose not to partake… Anyone else feel like we’re living Black Mirror?

Here are a few interesting technology updates:
Tech storage is getting way more expensive due to AI - things like hard drives, SD cards, and RAM for your computer are all gonna get pricier in the years to come. Short version: all the big tech boys are buying up the materials required for computational memory, restricting supply for manufacturers who sell consumer goods and driving up prices for the raw materials. Expect 1-2 years of price hikes until supply meets demand… Read more.
Meta just announced Meta Compute, committing $600B+ to developing infrastructure (tens of gigawatts of capacity) to service future AI developments. Zuckerberg and the boys are going “all in” on a new level. Read more.
Microsoft is trying to be a “good neighbour” to communities, offering to cover its water and electricity use in small towns near data centres that are in uproar about the AI infrastructure push. Check it out.
ElevenLabs dropped ScribeV2, which seems to be a seriously powerful speech-to-text transcription model. It comes pre-loaded with AI terminology such as “LLM” and “GPT-4o”, and also handles multilingual use cases very efficiently. Check it out.
Qwen Image Edit 2.5 has made it possible to get other 3D angles of your image using AI. Basically, you can rotate a “camera” around the subject of your image and the AI will imagine its sides, back, top, etc. Try it here.
Matthew McConaughey has trademarked himself to guard against deepfakes, filing 8 trademarks with the US Patent and Trademark Office. Interesting move, as he’s also an investor in ElevenLabs, arguably the best text-to-voice AI generator on the market. Read more.
Meta’s SAM Audio can remove background noise from video. This is already an awesome innovation for video creators, who can isolate “bad audio” such as wind noise or cars in the background to get cleaner audio. It may also help with focus, such as blocking out the sound of a noisy cafe so you can isolate just the voice of the person you’re with. Watch the video.
Amazon launched Alexa.com, bringing its AI-infused Alexa assistant into the browser and competing for dominance alongside ChatGPT, Gemini, Claude and Grok. Have a look.
Meta announced the acquisition of AI agent startup Manus for a reported figure of over $2B, adding a top-performing agentic system and revenue-generating product to its aggressive AI expansion. Read more.
Ben Affleck & Matt Damon on AI with Joe Rogan
Good take from the big boy actors on AI. They’re basically saying this:
AI is useful, but it’s not magic, and it’s definitely not a replacement for people. Worth a watch, particularly Ben Affleck’s part; he is a very intelligent man.
{{ First name | friend }}, thanks for dropping in again.
Forward this to a friend if you found it useful.
Stay human,
Billy
