Featured

Parsing Script

Spotter Data Parsing Script

Sofar’s internal open-source parsing script can help you analyze your Spotter’s SD card data. The script will help associate your data with a human-readable timestamp and translate the spectral data into bulk parameters.

Please note: The parsing script is not an actively supported tool, and we recommend that customers build their own ingestion tools using the raw Spotter SD card data. Further information on the meaning of the data in the SD card files can be found in the Spotter SD Card Data Guide.

Computer requirements

  • Python 3 or higher
  • Python modules installed: pandas, numpy, scipy

Download the latest parsing script

Access the latest parsing script here.

Further information on how the parser script works can be found in the comments at the top of the file and in the README.

Includes:

  • Ability to parse data from Spotter and Smart Mooring devices running firmware v2.0 and higher.
  • Ability to parse data from Spotter and Smart Mooring devices running firmware v1.12.0 and lower.

Using the parsing script

  • Download the parsing script onto the computer you will use to retrieve Spotter SD card data.
  • Unzip the file.
  • Be sure you have the following Python modules installed: pandas, numpy, scipy. If Python is already installed, this can usually be accomplished by running the following in a terminal or shell window: pip3 install --user pandas numpy scipy
  • Copy the Spotter files to your computer.
  • Copy the parsing script into the directory where you copied the Spotter files.
  • Open a terminal window at the directory from the previous step and run:
    python3 sd_file_parser.py
  • The parser script should find the Spotter files in its current directory and operate on them.

Optional command-line parameters

You can use the following command-line parameters to customize the operation and output of the parser.

outputFileType

Specify additional file types to be output. For example, outputFileType=matlab will output MATLAB-formatted files.

spectra

By default, the script will only produce the variance density spectrum. If the directional moments are also desired, add the command-line switch spectra='all', i.e.:

python3 sd_file_parser.py spectra='all'

…in which case files containing a1, b1, a2, b2 (in separate files) will be produced.
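To see what the bulk-parameter translation amounts to: significant wave height and mean period fall out of moments of the variance density spectrum. The sketch below uses a synthetic spectrum and a hand-rolled trapezoidal integral; the real column layout is described in the SD Card Data Guide, not assumed here.

```python
import numpy as np

def integrate(y, x):
    # Trapezoidal rule, written out to avoid NumPy version differences.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def bulk_parameters(freq, energy):
    """Bulk wave parameters from a variance density spectrum E(f) [m^2/Hz]."""
    m0 = integrate(energy, freq)              # zeroth spectral moment [m^2]
    m1 = integrate(freq * energy, freq)       # first spectral moment
    hs = 4.0 * np.sqrt(m0)                    # significant wave height [m]
    tm01 = m0 / m1                            # mean period Tm01 [s]
    tp = 1.0 / freq[np.argmax(energy)]        # peak period Tp [s]
    return hs, tm01, tp

# Toy spectrum: a narrow swell peak at 0.1 Hz (10-second waves).
f = np.linspace(0.025, 0.5, 96)
e = np.exp(-0.5 * ((f - 0.1) / 0.02) ** 2)
hs, tm01, tp = bulk_parameters(f, e)
print(f"Hs = {hs:.2f} m, Tm01 = {tm01:.1f} s, Tp = {tp:.1f} s")
```

With the peak at 0.1 Hz, this reports a peak period of 10 s and mean period close to it, which is the kind of summary the parser derives from the raw spectral files.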


Generative AI: Predicting the Future

What is the fuss all about?
Generative AI refers to artificial intelligence systems that are able to generate new content or data that is similar to a training dataset. This can include generating text, images, music, or other types of data.

Generative AI systems use a variety of techniques, such as deep learning, to learn the patterns and structures present in the training data and then use that knowledge to generate new, original content that is similar in style or content to the training data. These systems can be used for a wide range of applications, including language translation, image generation, and music composition.

Some examples of generative AI include language translation models that can translate text from one language to another, image generation models that can create new images based on a given set of inputs, and music generation models that can create original compositions based on a set of musical styles or genres.



How does it work?
Generative AI systems work by learning the patterns and structures present in a training dataset, and then using that knowledge to generate new, original content that is similar in style or content to the training data.

There are a number of techniques that can be used to build generative AI systems, including deep learning, which involves training a neural network on a large dataset and then using the learned patterns to generate new content.

To train a generative AI model, the model is typically fed a large dataset of training examples. The model then analyzes the dataset and learns the patterns and structures present in the data. Once the model has learned these patterns, it can use that knowledge to generate new, original content that is similar to the training data.

For example, if a generative AI model is trained on a dataset of images of animals, it might learn to recognize patterns such as the shape of an animal’s head, the color of its fur, and the way it moves. Once the model has learned these patterns, it can generate new images of animals that are similar to the training data but also unique and original.

There are many different approaches to building generative AI systems, and the specific techniques used will depend on the type of data being generated and the goals of the model.

There are several types of generative AI systems, including:

  1. Autoregressive models: These models generate new data by predicting the next value in a sequence based on the previous values. For example, an autoregressive model might be used to generate new text by predicting the next word in a sentence based on the previous words.
  2. Generative adversarial networks (GANs): These models consist of two neural networks: a generator and a discriminator. The generator generates new data, while the discriminator tries to distinguish the generated data from real data. The generator and discriminator are trained together, with the generator trying to produce data that is difficult for the discriminator to identify as fake, and the discriminator trying to accurately identify fake data.
  3. Variational autoencoders (VAEs): These models consist of an encoder and a decoder. The encoder takes in data and maps it to a latent space, while the decoder takes a latent representation and generates new data. VAEs can be used to generate new data that is similar to the training data, as well as to perform tasks such as data compression and denoising.
  4. Normalizing flow models: These models use a series of invertible transformations to map data from a simple distribution (such as a standard normal distribution) to a more complex distribution (such as the distribution of the training data). Normalizing flow models can be used to generate new data that is similar to the training data.

These are just a few examples of the types of generative AI systems that exist. There are many other approaches to building generative AI models, and the specific techniques used will depend on the type of data being generated and the goals of the model.
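Of the four families above, the autoregressive idea is the simplest to show at toy scale. The sketch below is illustrative only (bigram counts standing in for a neural network), but the generation loop is the same shape: predict the next token from the previous one, append it, repeat.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": count which words follow which in the corpus.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def generate(start, length, seed=0):
    """Autoregressive sampling: each word is predicted from the previous one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = next_words.get(out[-1])
        if not choices:          # dead end: no observed continuation
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 8))
```

A real autoregressive language model replaces the count table with a learned network and words with subword tokens, but conditions on context and samples in exactly this way.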

How do we train a dataset for AI applications?

To train a generative AI model, you will need a training dataset that contains examples of the type of data that you want the model to generate. For example, if you want to train a generative AI model to generate images, you will need a dataset of images.

To train the model, you will typically follow these steps:

  1. Preprocess the training data: This might involve cleaning the data, formatting it in a specific way, or performing other types of preprocessing to make it suitable for training.
  2. Split the training data into a training set and a validation set: The training set is used to train the model, while the validation set is used to evaluate the model’s performance during training.
  3. Choose a model architecture and hyperparameters: The model architecture refers to the structure of the model, including the number and size of layers, the type of activation functions used, and other details. The hyperparameters are values that are set before training, such as the learning rate and the batch size.
  4. Train the model: This involves feeding the training data to the model and using an optimization algorithm to adjust the model’s weights and biases so that it can learn to generate data that is similar to the training data.
  5. Evaluate the model on the validation set: This involves using the model to generate data and comparing the generated data to the validation data to see how well the model is performing.
  6. Fine-tune the model: If the model’s performance is not satisfactory, you may need to adjust the model architecture, hyperparameters, or other aspects of the model to improve its performance. This process is known as fine-tuning.
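Framework details aside, the steps above can be sketched end to end. The "model" here is deliberately trivial (a single weight fit by gradient descent) so that the shape of the loop, split then train then validate, is the point rather than the model itself.

```python
import random

# Steps 1-2: toy dataset (y ≈ 3x + noise), split into train and validation sets.
rng = random.Random(42)
data = [(i / 100, 3.0 * (i / 100) + rng.gauss(0, 0.1)) for i in range(100)]
rng.shuffle(data)
train, val = data[:80], data[80:]

# Step 3: the "architecture" is a single weight; hyperparameters fixed up front.
w = 0.0
learning_rate = 0.05
epochs = 200

def mse(dataset, w):
    return sum((w * x - y) ** 2 for x, y in dataset) / len(dataset)

# Steps 4-5: optimize on the training set, evaluate on the held-out set.
for epoch in range(epochs):
    for x, y in train:
        grad = 2.0 * (w * x - y) * x      # d(loss)/dw for one example
        w -= learning_rate * grad
    val_loss = mse(val, w)                # step 5: validation check

print(f"learned w = {w:.2f}, validation loss = {val_loss:.4f}")
```

If validation loss stalled or diverged here, step 6 (fine-tuning) would mean revisiting the learning rate or the model itself, exactly as described above.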

Once the model is trained and fine-tuned, you can use it to generate new, original data that is similar to the training data.




ChatGPT vs Google Search Engine

Can ChatGPT kill Google’s search engine monopoly?

It’s highly unlikely that GPT-3 or any other chatbot could kill Google’s search engine. While GPT-3 is a very advanced language processing model, it is not designed to replace search engines like Google. GPT-3 is intended to assist users in generating human-like text based on a given prompt, but it is not capable of the advanced indexing and search algorithms that Google’s search engine uses to quickly and accurately find information on the internet. In short, GPT-3 and other chatbots are not a threat to Google or other search engines.


Chatbots and their history

Chatbots, also known as conversational agents, are computer programs designed to simulate conversation with human users. They can be integrated into a variety of platforms, including messaging apps, websites, and mobile apps, to provide users with quick and convenient access to information or services.

The history of chatbots can be traced back to the 1950s, when researchers began experimenting with computer programs that could simulate conversation with human users. One of the earliest examples of a chatbot was ELIZA, a program developed at MIT in the 1960s that could mimic the responses of a psychotherapist in a simple, text-based conversation.

Since then, chatbots have evolved significantly, with advances in artificial intelligence and natural language processing allowing them to become more sophisticated and human-like in their interactions with users. Today, chatbots are used in a wide range of applications, from customer service and e-commerce to entertainment and education.


How does Google Search Engine work?

Google’s search engine uses a complex algorithm to search the internet and return the most relevant results for a given query. When a user enters a search query, Google’s algorithm uses advanced indexing and crawling techniques to find pages on the internet that are related to the query. It then ranks the pages based on a number of factors, such as the relevance of the content and the number of other websites that link to the page.

The exact details of Google’s algorithm are a closely guarded secret, but we do know that it takes into account hundreds of factors when ranking pages, including the relevance and quality of the content, the user’s location and search history, and the popularity of the website.

Once the algorithm has ranked the pages, it returns a list of results to the user, with the most relevant and useful results appearing at the top of the page. This allows users to quickly and easily find the information they are looking for, making Google’s search engine one of the most powerful and widely used tools on the internet.
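As a toy illustration of that ranking idea: the real signals and weights are secret, as noted above, so the scoring rule and pages below are invented purely for illustration (term overlap as "relevance", inbound links as "popularity").

```python
import math

# Hypothetical mini-index: each page has text and a count of inbound links.
pages = {
    "page_a": {"text": "chrome browser speed and security tips", "inlinks": 120},
    "page_b": {"text": "history of web browsers and search", "inlinks": 15},
    "page_c": {"text": "browser speed benchmarks", "inlinks": 40},
}

def score(query, page):
    """Toy ranking: fraction of query terms present, weighted by link popularity."""
    terms = set(query.split())
    words = set(page["text"].split())
    relevance = len(terms & words) / len(terms)
    return relevance * math.log(1 + page["inlinks"])

def search(query):
    # Return page names ordered from highest to lowest score.
    return sorted(pages, key=lambda name: score(query, pages[name]), reverse=True)

print(search("browser speed"))
```

Even this two-signal caricature shows the core tension Google balances: a highly linked page can outrank a slightly more relevant but obscure one.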


Chrome: Gateway to the Internet

Google Chrome is a popular web browser that offers a fast, simple, and secure browsing experience for users. Developed by Google, Chrome is available for Windows, Mac, Linux, Android, and iOS operating systems.

Chrome is fast
The browser uses Google’s own V8 JavaScript engine, which is designed to compile and execute JavaScript quickly. This means that web pages load and respond faster in Chrome, which can save users time when browsing the internet.

Chrome is simple
The browser has a clean, user-friendly interface that makes it easy for users to navigate and find the information they need. Chrome also offers a variety of useful tools, such as the ability to create and manage bookmarks, as well as a password manager that can securely save and autofill your login information.

Chrome is secure
The browser uses Google’s Safe Browsing technology to protect users from malware and phishing attacks. It also offers Incognito mode, which allows users to browse the internet privately without leaving a trace of their activity on their device.

In addition to these core features, Chrome also offers a wide range of extensions that can add extra functionality to the browser. These extensions can do things like block ads, check the weather, or even help you save money when shopping online.

Google Chrome is a versatile and powerful web browser that offers users a fast, simple, and secure browsing experience. Whether you’re a casual internet user or a power user, Chrome has the tools and features you need to make the most of your time online.

Exceptionalism, Education, and the Blind Spots of Power

How history education shapes political imagination—and what happens when it goes unchallenged

Public debates about geopolitics often reveal less about foreign policy itself and more about how people have been taught to understand the world. Few places make this more obvious than online discussions about the role of the United States in global affairs—particularly when claims of moral authority, military dominance, or historical inevitability go largely unquestioned.

A recent discussion illustrates how these assumptions don’t emerge from nowhere. They are cultivated.


The limits of “limited bandwidth”

Formal schooling has finite time and attention. Nearly every country faces hard choices about what to include in compulsory history education. In practice, this often means emphasizing a few large, identity-forming narratives: national origins, wars of survival, and defining victories.

In the US context, this has typically meant a heavy focus on the Second World War—often framed as a moral triumph in which America “saved the world.” While the importance of US industrial power and military involvement is undeniable, this framing frequently downplays the contributions and sacrifices of other Allied nations and glosses over uncomfortable realities such as domestic fascist sympathy, segregation, and political repression at home.

Once World War II ends, history classes often rush through the remainder of the 20th century. Institutions like NATO, the Cold War, and conflicts in Korea and Vietnam are compressed into a few hurried lessons, stripped of nuance and global context.

The result is not necessarily ignorance of facts—but a lack of proportionality.


Exceptionalism as curriculum, not accident

When education repeatedly reinforces the idea that one nation is the central pillar of modern history, a subtle message takes root: global stability depends on that country’s will. Over time, this becomes less a political opinion than a default assumption.

Add to this the legacy of Manifest Destiny—the belief that expansion and dominance are not merely strategic choices but moral imperatives—and the outcome is predictable. International alliances are reframed as favors. Military coalitions become evidence of benevolence rather than mutual obligation. Other countries’ sovereignty becomes conditional.

This worldview helps explain why some Americans are genuinely unaware that allies like Denmark suffered proportionally similar losses alongside US forces in Iraq and Afghanistan, or that the collective defense clause of NATO has only ever been formally invoked once—by the United States itself.


Not everyone absorbs this narrative uncritically. Many people describe their political awakening as happening despite formal education, not because of it: through anti-war music, independent reading, university study, or even exposure to media they initially disagreed with.

Others—particularly those educated in ideologically insulated environments such as rigid homeschooling systems—describe the process as unlearning rather than learning. The emotional residue of discovering how wrong one once was often includes embarrassment, anger, and a sense of betrayal.

But it also includes growth.


Interestingly, similar “bandwidth” constraints exist outside the US. In places like Scotland, post-war global history is often minimally covered in compulsory schooling, with deeper analysis reserved for higher education. The difference lies less in what is omitted and more in what is implied: national humility versus national indispensability.

When history is taught as a set of tools for thinking—rather than a mythos to be defended—students are more likely to recognize complexity, shared responsibility, and mutual dependence.


When public discourse treats international cooperation as optional gratitude rather than shared commitment, the consequences are real. It becomes easier to imagine coercion as diplomacy, invasion as inevitability, and alliances as hierarchical rather than reciprocal.

Education alone won’t fix this. But acknowledging how narratives are constructed—and whose perspectives are minimized—is a necessary start.

History does not belong to one country. And the sooner more people are taught that, the healthier global politics will be.

Big Tech’s 2026 Reality Check: AI Reshapes Priorities at Apple, Meta, and Anthropic

By early 2026, one truth is becoming unavoidable across Silicon Valley: artificial intelligence is no longer an experimental add-on. It is the organizing principle behind strategy, spending, and—even more telling—cutbacks.

This week’s developments at Apple, Meta, and Anthropic highlight how decisively the industry is reallocating capital, talent, and attention toward AI, even when it means retreating from once-grand visions.

Apple Bets on Integration Over Invention

Apple’s long-awaited Siri overhaul finally has a path forward, and the solution is pragmatic rather than ideological. By signing a multi-year agreement to power Siri with Google’s Gemini models—alongside Apple’s own foundation models—the company is conceding what many observers already suspected: Apple doesn’t need to win the AI model race to win the AI era.

Instead, Apple is playing to its historical strength—distribution. With nearly 2.5 billion active devices worldwide, Apple’s advantage lies in quietly embedding AI into everyday workflows rather than forcing users into chatbot-first interactions. Features like message prioritization, photo cleanup, and contextual intelligence already hint at this philosophy.

Siri’s struggles over the past decade—from being an early pioneer in 2011 to falling behind Alexa, Google Assistant, and ChatGPT—became symbolic of Apple’s perceived AI lag. The delayed rollout of Apple Intelligence in 2025 only reinforced that narrative. But the Gemini partnership reframes the conversation: Apple is optimizing for reliability and usability, not model supremacy.

In a world where fewer than a quarter of the global population will be regular AI users by the end of the decade, that restraint may prove to be an advantage.

Anthropic Pushes AI Deeper Into Healthcare—Carefully

If Apple’s story is about distribution, Anthropic’s latest move is about trust.

With the launch of Claude for Healthcare, Anthropic is positioning itself as the privacy-first alternative in one of AI’s most sensitive domains. The platform allows providers and patients to use AI for tasks like reviewing insurance claims, summarizing health records, and triaging messages—while emphasizing explicit consent and data isolation.

The timing is notable. OpenAI recently introduced ChatGPT Health and followed it with an acquisition aimed at better ingesting medical records. Meanwhile, Nvidia, Microsoft, and pharmaceutical giants are pouring billions into AI-driven drug discovery.

Yet healthcare remains a minefield for generative AI. Hallucinations, overconfidence, and training-data leakage are not abstract risks when medical decisions are involved. Anthropic’s framing—AI as a “second opinion,” not a replacement for doctors—is an explicit attempt to draw a line between assistance and authority.

With tens of millions of people already using AI tools for informal healthcare advice—and millions more lacking access to adequate care—the industry faces mounting pressure to balance innovation with restraint.

Meta’s Pivot From Metaverse Dreams to AI Infrastructure

Perhaps the starkest signal of AI’s dominance comes from Meta.

Reports that the company is preparing to lay off roughly 10% of its Reality Labs division underscore how dramatically priorities have shifted. Just a few years ago, the metaverse was positioned as Meta’s future, driving its rebrand and massive VR investments. Today, those ambitions are being scaled back to fund something far more immediate: AI compute.

Meta’s newly announced “Meta Compute” initiative aims to build tens—and eventually hundreds—of gigawatts of computing capacity. This follows aggressive moves into nuclear energy partnerships designed to secure long-term power supply.

The strategy appears to go beyond training better models. Analysts increasingly see compute and energy as a new form of strategic currency—assets that can be leased, sold, or weaponized competitively. Meta could ultimately emerge not just as an AI developer, but as an infrastructure provider, putting it in direct competition with hyperscalers and emerging neocloud players.

Ironically, Reality Labs may still play a role in Meta’s AI future. Smart glasses like Meta Ray-Bans remain a potential mass-market interface for AI, even as VR headsets fade from the spotlight. The vision hasn’t disappeared—it’s just been subordinated.

The Bigger Picture: AI as the New Gravity

Across these stories runs a common thread: AI is no longer competing with other priorities. It is absorbing them.

Voice assistants, healthcare tools, data centers, energy strategy, hardware roadmaps—everything now bends toward AI enablement. Even ambitious projects like the metaverse are being reevaluated through that lens.

The companies that succeed in 2026 and beyond may not be the ones with the most advanced models, but those that best align infrastructure, trust, and user experience around AI’s real-world deployment.

The era of experimentation is ending. The era of consolidation has begun.

From Prompt → PRD → PROMPT.md → Warp: AI-Native Build Loop

Alright, so here’s how I build projects these days. It’s half prompt engineering, half product design, and half automation sorcery. (Yes, that’s three halves. Welcome to modern dev.)

🧩 Step 1: Turn the idea into a PRD

Every project starts with a single line in ChatGPT Pro. Something like:

“Build an LSP for Strudel files that includes autocomplete and diagnostics.”

That “initial prompt” goes through a 10-step pipeline that spits out a Product Requirements Document (PRD). It’s not fancy, just structured:

  1. Normalize intent (who/what/why/constraints).
  2. Fetch related context (past tickets, metrics, etc.).
  3. Define outcomes and KPIs.
  4. Identify users and scenarios.
  5. Outline scope/non-goals.
  6. Sketch UX flows.
  7. Write functional requirements (Given/When/Then).
  8. Add non-functional reqs (SLOs, reliability, cost).
  9. Design rollout and experiment gates.
  10. Log risks, decisions, and open questions.

The result is a clean, review-ready PRD in markdown: the “human contract” for the project.
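A trimmed-down sketch of what that pipeline's skeleton could look like, with the LLM content-filling stubbed out as placeholders. The section names mirror the ten steps; the function and its output format are mine, not the actual tool.

```python
PRD_SECTIONS = [
    "Intent (who/what/why/constraints)",
    "Related Context",
    "Outcomes and KPIs",
    "Users and Scenarios",
    "Scope / Non-Goals",
    "UX Flows",
    "Functional Requirements (Given/When/Then)",
    "Non-Functional Requirements (SLOs, reliability, cost)",
    "Rollout and Experiment Gates",
    "Risks, Decisions, Open Questions",
]

def prd_skeleton(idea):
    """Render a one-line idea into a markdown PRD skeleton; a real
    pipeline would fill each section with an LLM pass plus review."""
    lines = [f"# PRD: {idea}", ""]
    for i, section in enumerate(PRD_SECTIONS, 1):
        lines += [f"## {i}. {section}", "", "_TBD_", ""]
    return "\n".join(lines)

doc = prd_skeleton("LSP for Strudel files with autocomplete and diagnostics")
print(doc.splitlines()[0])
```

The value is less the markdown than the forcing function: every idea has to answer the same ten questions before any code exists.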

🤖 Step 2: Generate the PROMPT.md (the machine contract)

Once the PRD is solid, I feed it into ChatGPT to generate a PROMPT.md file — basically the machine-readable version of the spec.

It’s got:

---
prompt_name: <feature>-agent
model: gpt-4o
fallback_models: [claude-opus, gpt-4o-mini-high]
tags: [prd-derived, agentic, production-ready]
---

Then sections like:

  • SYSTEM – defines the agent’s role and tone.
  • CONTEXT – condensed PRD details.
  • TASK – numbered objectives.
  • CONSTRAINTS – guardrails and safety checks.
  • ACCEPTANCE TESTS – from the PRD.

That file tells the AI how to work, what to output, what “done” means, and how to self-check without hallucinating its reasoning. It’s the bridge between documentation and orchestration.
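Because the file has a fixed shape, it is easy to lint before handing it to an agent. A minimal validator sketch, with the front-matter keys and section names taken from the structure above (the function name and sample content are mine):

```python
import re

REQUIRED_FRONT_MATTER = {"prompt_name", "model"}
REQUIRED_SECTIONS = ["SYSTEM", "CONTEXT", "TASK", "CONSTRAINTS", "ACCEPTANCE TESTS"]

def validate_prompt_md(text):
    """Return a list of problems; an empty list means the file looks well-formed."""
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not match:
        return ["missing YAML front matter delimited by ---"]
    front, body = match.groups()
    keys = {line.split(":", 1)[0].strip() for line in front.splitlines() if ":" in line}
    problems = [f"front matter missing key: {k}" for k in sorted(REQUIRED_FRONT_MATTER - keys)]
    problems += [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in body]
    return problems

sample = """---
prompt_name: strudel-lsp-agent
model: gpt-4o
---
SYSTEM: you are...
CONTEXT: condensed PRD...
TASK: 1. ...
CONSTRAINTS: ...
ACCEPTANCE TESTS: ...
"""
print(validate_prompt_md(sample))
```

Running a check like this in CI keeps a malformed machine contract from ever reaching the agent.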

⚙️ Step 3: Drop both into Warp and hit go

I upload both the PRD.md and PROMPT.md into the repo, then tell Warp:

“Build this project according to these two files and my global rules.”

The Warp agent evaluates the PRD and PROMPT.md, drafts a multistage plan, and shows me the steps. I can approve, revise, or deny each one. Once approved, it scaffolds the repo, generates a task list, and starts executing.

🧪 Step 4: Iterative build, not one-shot delusion

Look, I don’t believe in “one-shotting.” Software design principles and sane engineering practice preclude me from such delusions. Real systems are iterative, test-driven, and full of tradeoffs.

That said… this setup is the closest I’ve ever gotten to feeling like I one-shotted a project. Warp ingests the PRD, reads the PROMPT.md like scripture, and starts building in verifiable steps. I still guide it, but it gets shockingly close to “prompt-to-product.”

🧠 Step 5: How the agent actually builds

It runs a tight loop:

  1. Validate PRD and PROMPT structure.
  2. Decompose acceptance criteria into testable tasks.
  3. Write failing tests first (TDD).
  4. Implement minimal code to pass.
  5. Lint → typecheck → test → print results.
  6. Commit with Conventional Commits (multi-line, meaningful).
  7. Block merge if gates or tests fail.
  8. Open PR linking PRD for human review.

Everything is transparent, logged, and traceable. And I can still step in mid-build, request revisions, or provide updated constraints.
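Step 5's gate sequence is just ordered subprocess calls with exit-code checks. A sketch of that idea, with the real lint/typecheck/test commands replaced by trivial placeholders (substitute your project's actual tools):

```python
import subprocess
import sys

# Gate commands are illustrative placeholders, not the actual toolchain.
GATES = [
    ("lint",      [sys.executable, "-c", "print('lint ok')"]),
    ("typecheck", [sys.executable, "-c", "print('types ok')"]),
    ("test",      [sys.executable, "-c", "print('tests ok')"]),
]

def run_gates():
    """Run each gate in order; any nonzero exit code blocks the merge (step 7)."""
    for name, cmd in GATES:
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(f"{name}: {result.stdout.strip() or result.stderr.strip()}")
        if result.returncode != 0:
            return False              # gate failed: block the merge
    return True

print("merge allowed:", run_gates())
```

The point of gating on exit codes is that the agent cannot talk its way past a failure: either the commands pass or the merge is blocked.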

🔒 Step 6: Hygiene and exclusions

Global rule: the PRD, PROMPT.md, and WARP.md all live in the repo but are excluded from git (.git/info/exclude). That keeps the scaffolding logic private while still versioning the actual deliverables.
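Concretely, that exclusion is just a few patterns appended to the repo's local exclude file, which uses .gitignore syntax but is never committed or shared:

```
# .git/info/exclude: same syntax as .gitignore, but local to this clone
PRD.md
PROMPT.md
WARP.md
```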

🚀 The punchline

The whole setup’s basically a handshake between what we want and what the machine knows how to do:

  • PRD.md — the human side: clarity, scope, purpose.
  • PROMPT.md — the machine side: instructions, guardrails, tests.
  • Warp — the executor that translates both into working code.

You’re not hitting a magic button here. You’re setting up a loop you can trust, where humans lay out the context and the AI builds from the ground up.

It’s as close to “push-button engineering” as I’m ever gonna get, and I’ll take it.

If you’re running similar prompt-to-PRD-to-code loops (Warp, Claude, Codex, MCP, Obsidian, whatever), drop your setup. Always curious how others are taming the chaos.

EMR Bloat Is Costing Us: Time, Money—and Patient Trust

Highlights:

  • 95% of EMR (electronic medical record) content is noise—not useful to clinicians
  • Cloned notes, boilerplate assessments, CPT-driven charting dominate clinical documentation
  • “Garbage data” in EMRs erects barriers to care, fuels physician burnout, and hampers AI adoption

The EMR Breakdown: Quantity over Quality

Dr. John Asghar, a spine surgeon, recounts reviewing 60‑page charts from a 24‑hour stay—only 4–5 pages offered any clinical value. He’s far from alone:

  • Leah Houston, MD blamed HIPAA, HITECH, and TEFCA for turning meaningful records into billing fodder
  • Other clinicians lament that 95% of EMR content is irrelevant noise
  • Malpractice reviewers can often distill hundreds of pages into a half‑page summary

This “note bloat” adds time—yet subtracts from safety, continuity, and trust in care. And when critical insights lie buried under auto-populated vitals, checkbox templates, and copy-pasted histories, both patients and clinicians pay.


The Hidden Costs: Burnout, Errors, and Inflation

Garbage In, Garbage Out: AI in healthcare depends on quality data. As one commenter stated, “garbage data is a limiting factor for the use of AI”—and more importantly, “garbage data is expensive and increases the likelihood for error.”

Healthcare institutions spend millions on storage, auditing, and defending massive chart dumps—though regulatory liability, not clinical value, often drives documentation length. The fallout? Clinicians spend up to 35% of their day on documentation tasks, fueling burnout and diverting attention from healing.


AI Isn’t a Magic Fix—But It Can Help (If Data Improves)

Indeed, EMR bloat has throttled AI’s upside. Despite enthusiasm around LLM-powered summarization and speech-transcribed note generation, performance craters with low-quality inputs. The adage “garbage in, garbage out” stands strong—echoed in both industry debates and academic papers.

Yet, solutions are emerging. Research shared on preprint servers such as arXiv shows promise for AI‑driven “intelligent clinical documentation” that auto-generates SOAP and BIRP notes from conversations—potentially saving clinicians time while improving note utility.


Toward Lean, Patient-Centric Documentation

Here’s how organizations can right-size EMR content:

  1. Prioritize: Require only assessment, plan, and critical diagnostics in chart summaries.
  2. Roll back copy/paste templates: Enforce audit-flagging to reduce boilerplate misuse.
  3. Adopt generative AI tools: Use voice-to-text and summarization to craft concise clinical notes.
  4. Align policy with purpose: Advocate for streamlined CMS and accreditation documentation guidelines.
  5. Equip clinicians: Train staff to trim irrelevant data and spotlight decision-making rationale.

The Stakes: Reclaiming Clinical Time & Trust

Fixing EMR bloat isn’t just administrative housekeeping—it’s core to healthcare’s future. It reduces clinician fatigue, improves patient understanding, lowers misdiagnosis, and unlocks AI’s real potential.

As Dr. Asghar warned: “garbage data is expensive and increases the likelihood for error.” In a world where every minute matters, cutting through the clutter is both a financial imperative and a moral one.


Why This Matters

For healthcare leaders, this moment is an inflection point. Pressure from policy-makers, tech vendors, clinicians, and patients converges to demand a shift from documentation overload to clinical precision.

Will EMRs evolve into efficient clinical allies—or remain cumbersome relics? Success hinges on streamlining data, embracing AI responsibly, and putting patient care—not billing—at the center.


Bottom Line: It’s time to prune the EMR. The savings—time, dollars, and trust—are too significant to ignore.

Epic UGM 2025: When EHRs Stopped Being Just Records

For two decades, Electronic Health Records (EHRs) have been the necessary evil of healthcare. Essential? Absolutely. Beloved? Almost never.

Doctors saw them as digital filing cabinets that turned healers into typists. Patients barely noticed them, except when a nurse squinted at a screen instead of looking them in the eye.

But at Epic’s User Group Meeting 2025 in Verona, Wisconsin, one thing was obvious: the EHR is no longer being sold as software. It’s being sold as a relationship platform — powered by AI.


1. Art & Emmie: The New Healthcare Duo

Epic’s new duo of AI copilots stole the show:

Art (for clinicians)

  • Keeps the visit on track, pacing conversations against the patient’s pre-visit agenda.
  • Pulls in insights from Cosmos (Epic’s 300M-patient dataset).
  • Captures new info (like family history) in real time.
  • Queues up orders and puts them in a “shopping cart” for final review.
  • Will draft the clinical note (ambient scribe) starting early 2026.

Emmie (for patients)

  • Reaches out before the visit to ask, “What’s on your mind?”
  • Drafts an agenda for the doctor-patient conversation.
  • Explains results after the visit in plain language.
  • Sends reminders for meds and follow-ups.
  • By Feb 2026, Emmie will also manage a centralized to-do list for preventive care.

This isn’t just workflow automation. It’s Epic betting that the future of healthcare is collaborative AI — a feedback loop where both sides (patient + provider) are supported, guided, and nudged by agents.


2. The Microsoft Alliance: Ambient Wars Begin

When Microsoft’s Joe Petro took the stage, the subtext was clear: Epic isn’t dabbling in AI charting. It’s bringing in Microsoft Dragon Copilot — a proven ambient AI system — to power its note-drafting.

  • Launch window: Draft notes in early 2026. Insurance checks and decision-support in late 2026.
  • Impact: Competing startups (Abridge, Suki, others) just got a wake-up call. Epic + Microsoft isn’t just a partnership — it’s a distribution nuke.

This is no longer about who has the better AI transcript. It’s about who controls the workflow.


3. Cosmos: The Predictive Engine

Epic’s Cosmos dataset — now at 300M patients and 16B encounters — is moving from retrospective analysis to predictive intelligence.

New features teased:

  • Length of stay predictions → scan similar patients to estimate discharge dates.
  • Recovery outcomes → forecast when a patient might return to work or activity.
  • AI-driven free-text search → coming Nov 2025, clinicians can query “why was this med changed?” and get an answer from years of notes.
  • Wound measurement AI → snap a photo, get auto-calculated wound size.
  • Virtual dual sign-off → a “video nurse” confirms high-risk medication dosing.

One Epic presenter, dressed in Star Trek gear, joked: “I love this because I’m a doctor, not a fortune teller.” Cosmos is trying to make them both.


4. Governments Buy In: Public Health as Infrastructure

Judy Faulkner said it flatly: “You can’t measure the health of a population without a unified record.”

Epic is turning that thesis into contracts:

  • Washington state → first U.S. state to adopt Epic statewide.
  • Singapore & Northern Ireland → already live across entire countries.
  • Rural hospitals → connected via Epic’s Community Connect and shared state instances.

This is less about EHRs and more about governments treating data systems like roads or electricity. Epic wants to be the utility provider.


5. ROI: Because AI Alone Isn’t Enough

Faulkner acknowledged what every CFO in the audience was thinking: hospitals are broke.

Epic’s counteroffer: tools that pay for themselves.

  • Penny (AI revenue cycle assistant): codes, writes appeal letters.
  • Cost Reduction Dashboard: find savings.
  • Quarterly Executive Packets: benchmark your system vs peers.
  • Epic Dashboard: monitor performance of your org like a stock portfolio.

The subtext: AI that doesn’t bend the cost curve is just hype.


6. The Rest of the UGM Headlines

  • MyChart Central (Nov 2025): One login for all MyChart accounts. Patients stop juggling passwords.
  • Clinical Trial Management System (early 2026): Epic moves into research workflows.
  • AI Governance: Cosmos AI as a “check and balance” against black-box models.
  • Honor Roll Expansion: new tiers, including for MyChart excellence.
  • Market Share: Epic gained 48 new orgs this year — 17 of them ripped from Oracle.

And, in case anyone doubted Epic’s flair: Judy Faulkner entered the stage in a shiny vest, purple wig, and silver pants — a nod to sci-fi futures now becoming product roadmaps.


The Big Picture

Epic is no longer just an EHR vendor. It is positioning itself as:

  • Infrastructure layer (records, billing, clinical workflows).
  • AI layer (Art + Emmie, Penny, Cosmos).
  • Public health layer (statewide & nationwide integrations).

The pivot is clear: 📌 From “system of record” → to “system of intelligence.”


The Real Test

The question isn’t whether Epic can launch 190 AI tools. It’s this:

Will the doctor at 10 p.m., staring at her last patient chart, finally feel the difference? Because if Epic gets that right — the burden lifted, the work made lighter, the relationship restored — then UGM 2025 won’t just be remembered for the sci-fi costumes.

It’ll be remembered as the year the EHR grew up.

Healthcare doesn’t need more data. It needs more time. Epic is betting AI can buy it back.

The $100 Billion Mistake: How EMRs Became Billing Engines, Not Clinical Tools

Highlights

  • John Asghar MD saw a 60-page, 24-hour chart with only 4–5 pages useful.
  • Leah Houston MD and others blame HIPAA, HITECH, TEFCA for “billing fodder.”
  • The U.S. spends $100 billion+ annually storing data clinicians never use.

Problem Statement

When Dr. John Asghar, an adult and pediatric spine surgeon, opened a 60-page discharge summary for a patient who’d been in the hospital just 24 hours, he found only four or five pages that actually mattered. The rest—redundant vitals, copied histories, billing codes—was noise mandated by layers of regulation and insurance requirements. As Asghar’s tweet storm (71.6K views) spread, clinicians across disciplines piled on: from Leah Houston MD blaming HIPAA, HITECH, and TEFCA for turning notes into billing ledgers, to Bill S declaring “garbage data” the single biggest roadblock to AI in medicine.


From Patient Story to Invoice

  • The SOAP-to-Checkbox Shift
  • Regulatory Ripple Effects

Clinicians now navigate menus of up to 200 fields per note, each with the force of compliance. The result: a medical record that reads more like a balance sheet than a healing roadmap.


The True Costs of Bloat

  • Financial Overhead:
  • Clinician Burnout:
  • Patient Safety & Experience:

Path to Policy Reform

  1. Define a Minimal Core Dataset
  2. Sunset Legacy Templates
  3. Outcome-Tied Incentives
  4. Leverage FHIR & SMART on FHIR
  5. Clinician Governance & Continuous Audit

Conclusion

Dr. Asghar’s 60-page revelation isn’t an outlier—it’s the new normal in U.S. healthcare. And while regulatory frameworks aimed to protect patients and encourage technology adoption, they’ve unintentionally inflated the medical record into a compliance battleground. Pruning this bloat requires coordinated policy reform, modern interoperability, and—above all—a re-centering of the chart on patient care rather than billing codes.


Key Takeaway: Pruning the EMR can reclaim billions, revive clinician morale, and restore the medical record’s original purpose: guiding healing.

Epic UGM 2025: The Announcements That Signal the Next Era in Health IT

If you work in Healthcare, you’ve probably seen your feed flooded with hot takes, reactions, and side notes from Epic’s annual UGM conference.

For anyone working in health IT, this gathering in Verona isn’t just another corporate event—it’s where the roadmap for a significant portion of global healthcare software gets revealed. Epic has reach, and every year at UGM, they flex it.

The setting itself continues to spark debate. Verona isn’t exactly a conference city, and the early-morning drives to Chicago airports are the stuff of conference legend (and, frankly, concern). But that friction also adds to the mythology of UGM—rain-drenched Madison nights, cheese curds with Epic folks, and a city taken over by health IT people.

The Tone-Setters: Judy and Sumit

Judy Faulkner, now 82, still takes the stage with an energy that defines Epic’s culture. Her three guiding statements—help clinicians love their jobs, help organizations stay financially strong, help patients be healthy—were presented not as a hierarchy but as co-equal missions. The patient-centered slide repeated throughout reinforced that the end game remains: keep the patient at the core.

Sumit Rana followed, not coincidentally. Many see him as Judy’s eventual successor, and his framing of Epic’s innovation strategy was telling:

  • Eliminate (remove work that adds no value—prior auth was the example)
  • Automate (make the necessary work invisible)
  • Augment (make humans better at what they do)
  • Transform (change how the work itself is defined)

He boiled it down further: software that assists → software that carries out tasks. That’s where Epic sees AI’s trajectory.

AI Medical Scribe: The ‘Non-Event’ Announcement

The buzz coming into UGM was about Epic building its own ambient AI assistant to compete with the DAX Copilots and Abridge-type players. Judy defused the drama by flatly stating: yes, Epic is building native AI charting with Microsoft, targeting early 2026 rollout. Microsoft provides Dragon Ambient AI components; Epic stitches it into visit-ready notes. Other vendors remain in play, but the gauntlet is thrown. The future fight is whether you go Epic-native or stick with specialized vendors.

Cosmos AI (Comet): Epic Builds Its Own LLM

The real moonshot was Cosmos AI—dubbed Comet. Epic has fed 8 billion encounters into its own generative model (136B tokens, 1B parameters). Early results outperform many purpose-built ML models. The promise: a single model that can flex across tasks—risk prediction, decision support, guardrails for AI outputs—rather than a patchwork of niche models.

Cosmos already spans 300M patients and 16B encounters. Comet adds generative capabilities. The play is clear: control your data, control your model, control the guardrails.

A Cosmos AI Lab will open to researchers. Expect Epic to leverage its network effects—Cosmos contributors will get first access to these tools, reinforcing the incentive loop.

The AI Portfolio Expands: Art, Emmie, Penny

Epic formally introduced three branded AI assistants:

  • Art (clinician assistant)
  • Emmie (patient-facing assistant)
  • Penny (RCM assistant)

Each with staged rollouts:

  • Art: AI summaries, digital colleague, real-time auth, Cosmos-informed workflows (2026+).
  • Emmie: outreach, screening reminders, SMS scheduling, future voice agent (2026+).
  • Penny: denial appeals, autonomous coding (ED, radiology first), automated claims follow-up (2026+).

Judy prefers the term “Healthcare Intelligence.” Whether the rebrand sticks or not, the intent is clear: broaden AI’s identity beyond hype, into infrastructure.

The New UI: Epic’s Facelift

Slated for Nov 2026, Epic’s UI overhaul integrates AI deeply into clinician workflows. Patient plans, AI record queries, Cosmos insights—all surfaced in redesigned screens. Nurses get parallel updates. For patients, MyChart evolves with a digital concierge (Nov 2025) and preventative care to-do lists (Feb 2026). Expect Emmie to be the quiet transformer of the patient experience.

EpicOps ERP and Clinical Trials: Building the Vertical Stack

Epic is going after ERP—rebranded as EpicOps. Workforce management, supply chain, and financials, all integrated natively. First modules by 2027. This move threatens existing ERP vendors in healthcare and tightens Epic’s grip on operational workflows.

Clinical Trials management is also in play: an end-to-end system (launching with early adopters in Nov 2026). Add in blood bank, cell/gene therapy, fetal monitoring, occupational health—Epic is filling white space aggressively.

MyChart Central: One Login to Rule Them All

Multi-institution patients rejoice: MyChart Central will unify logins across organizations, live in Madison now, rolling out in Nov 2025. It’s patient-first, but politically complex (org autonomy vs network utility). Epic is betting patients will force alignment.

Training, Finances, and Everything Else

Epic spotlighted training as a lever for EHR satisfaction (Arch Collaborative data). Specialty-specific onboarding, Thrive and SmartUser courses, and “What’s New” features aim to shrink learning curves. Expect efficiency gains pitched as ROI.

On finances: Penny leads the AI charge, but new cost reduction and executive dashboards (Pulse, Exec) anchor the narrative that Epic cares about hospital solvency.

Expansion continues: Northern Ireland, Singapore, multiple provinces in Canada, and U.S. state-sponsored rural implementations. The tiered offering—Garden Plot, Orchard, potential Flower Plot—brings Epic down-market.

The Smaller Nuggets

  • AI pricing: pay-as-you-go vs “AI Suite” unlimited.
  • Organ donation via MyChart, powered by Donate Life.
  • Outbreak detection using Cosmos data.
  • Health Grid integrations across payers, devices, diagnostics, specialty societies.
  • Operational Services: Epic as consultant on LOS, access.
  • Judy’s annual “random” request: brighten adult hospitals like children’s hospitals.

Judy ended with: “We predict the future so we can prepare for it and so we can change it.” That line sums up Epic’s posture: they don’t just want to forecast; they want to author the script.

Epic UGM 2025 made one thing abundantly clear: Epic isn’t playing defense against the AI startups, ERP incumbents, or patient app challengers. They’re expanding the surface area of their ecosystem, using their network scale and data gravity as the ultimate moat.

The next 24 months will decide if healthcare embraces Epic’s “Healthcare Intelligence” era—or if the market fractures around specialized alternatives.

Talk to Your Data: Looker’s AI Agents

In today’s data-driven world, the ability to turn raw data into actionable insights is the competitive edge that modern organizations crave. Yet for too many teams, the gap between complex datasets and business-savvy questions remains wide. Looker, Google Cloud’s flagship BI platform, has already bridged much of that gap with its powerful semantic modeling layer (LookML), enabling governed, scalable analytics. Now, with the launch of Conversational Analytics Data Agents, Looker is taking a quantum leap forward—bringing AI-powered natural language querying directly to enterprise data.

Why Data Agents Matter

Traditional dashboards and SQL editors assume that business users either know the underlying schema or have analysts on speed dial. Data Agents flip this model on its head by letting stakeholders interact in plain English (or their language of choice). Behind the scenes, these agents:

  • Translate business terms (e.g., “loyal customers”) into precise filters (e.g., orders > 5 in the last 12 months).
  • Enforce governance by leveraging Looker’s semantic layer, ensuring every answer is built on trusted definitions.
  • Embed best practices like default date ranges or grouping fields, so every query aligns with organizational standards.
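The term-to-filter translation described above can be sketched in a few lines of Python. Everything here (`SEMANTIC_TERMS`, `build_filters`, the filter shape) is a hypothetical illustration of the idea, not Looker’s actual API:

```python
# Hypothetical sketch: a tiny "semantic layer" mapping business terms
# to governed filter definitions. Names and structure are invented for
# illustration; this is not Looker's API.
SEMANTIC_TERMS = {
    "loyal customers": {
        "field": "order_count",
        "op": ">",
        "value": 5,
        "window": "last 12 months",
    },
}

def build_filters(question_terms):
    """Resolve recognized business terms to their trusted filter definitions."""
    return [SEMANTIC_TERMS[t] for t in question_terms if t in SEMANTIC_TERMS]

# "loyal customers" always resolves to the same governed definition,
# no matter who asks or how the question is phrased.
filters = build_filters(["loyal customers"])
```

Because every conversational query resolves terms through one shared mapping, two users asking about “loyal customers” always get answers built on the same trusted definition.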

A Real-World Scenario

Imagine a sales manager asking, “Show me our top five products by revenue in Q2 for the Northeast region—excluding returns.” In seconds, the Data Agent applies the right filters (Order Date, Region, Return Status), aggregates by product category, and delivers a clear chart. No LookML edits, no manual joins, no back-and-forth with the analytics team.

Benefits Across the Organization

  • Analysts reclaim hours previously spent answering ad-hoc requests, focusing instead on deeper analyses and modeling.
  • Business leaders gain autonomy, running their own what-if scenarios without waiting in ticket queues.
  • Data teams maintain strict governance, as every conversational query respects the semantic definitions and access controls baked into Looker.

Looking Ahead

As AI continues to redefine how we work with data, Looker’s Data Agents represent a critical milestone on the path to truly self-service analytics. By combining Google’s advanced language models (Gemini) with Looker’s proven metadata layer, organisations can scale insights more securely and intuitively than ever before.

If you’re evaluating your BI roadmap for 2025 and beyond, consider how Conversational Analytics Data Agents can democratize data across your enterprise—empowering everyone, from executives to front-line staff, to ask questions and drive decisions with confidence.

The Prompt That’s Killing the Marketing Department: How AI Is Reshaping Go-to-Market Strategy

Marketing, as we know it, is transforming at breakneck speed. And it’s not just creative teams adapting to new channels or consumers demanding snappy content. It’s the quiet revolution of prompt engineering that’s threatening to blow up traditional marketing departments entirely.

Recently, Alex Hughes (Head of Growth at Droxy AI) posted a viral thread demonstrating just how powerful today’s large language models have become at automating marketing strategy. His argument was blunt: instead of hiring for each function, you can orchestrate the entire go-to-market plan in minutes—if you know how to ask.


The Big Idea: Structured, Role-Based Prompting

Most people think of AI as a copywriting assistant. That’s missing the point.

With the right prompt design, you can make your AI tool step into any marketing role.

Instead of “Write me an ad,” you can ask:

“Act as my Ad Creative Director. Build concepts, headlines, body copy variations, and emotional hooks for our target audience.”

By framing the role and the task clearly, you don’t just get text—you get strategy.


The Prompt Playbook for a Modern Marketing Team

Here’s how you can think about prompting your AI for each key marketing function.

Head of Content Prompt

Purpose: Define content strategy, content pillars, topics, and distribution.

Prompt Structure:

<Task>
Act as the Head of Content for our new product launch. Define the content strategy, including content pillars, topic ideas, and distribution channels.
</Task>

<Inputs>
<product>{Description}</product>
<target_audience>{Audience}</target_audience>
<goal>{e.g., Lead generation}</goal>
<tone>{e.g., Bold, friendly}</tone>
</Inputs> 

Ad Creative Director Prompt

Purpose: Create ad concepts with compelling messaging and emotional hooks.

Prompt Structure:

<Task>
Act as the Ad Creative Director. Create ad concepts, headlines, body copy variations, and calls-to-action for social media, search, and display ads. Emphasize benefits and emotional hooks.
</Task>

<Inputs>
<product>{Description}</product>
<target_audience>{Audience}</target_audience>
<goal>{e.g., Sign-ups}</goal>
<tone>{e.g., Witty, bold}</tone>
</Inputs> 

SEO Strategist Prompt

Purpose: Conduct keyword research, define topic clusters, suggest on-page SEO improvements.

Prompt Structure:

<Task>
Act as an SEO Strategist. Conduct keyword research, propose an SEO content cluster strategy, suggest on-page optimizations, and outline blog titles targeting our audience’s search intent.
</Task>

<Inputs>
<product>{Description}</product>
<target_audience>{Audience}</target_audience>
<goal>{e.g., Organic traffic}</goal>
<tone>{e.g., Authoritative, approachable}</tone>
</Inputs> 

Brand Strategist Prompt

Purpose: Shape positioning, messaging, and storytelling themes.

Prompt Structure:

<Task>
Act as a Brand Strategist. Develop brand positioning, value proposition, tone of voice guidelines, messaging pillars, and storytelling themes that resonate with the target audience.
</Task>

<Inputs>
<product>{Description}</product>
<target_audience>{Audience}</target_audience>
<goal>{e.g., Build trust}</goal>
<tone>{e.g., Confident, friendly}</tone>
</Inputs> 

All-in-One Mega Prompt

Purpose: Orchestrate the entire marketing strategy in one go.

Prompt Structure:

<Task>
Act as a full-stack AI marketing strategist for a startup preparing to launch a new product or service. You will handle market research, positioning, messaging, content creation, email copywriting, and SEO ideation.
</Task>

<Inputs>
<product>{Describe your product or service here}</product>
<target_audience>{Who is the product for? (demographics, psychographics, industry, etc.)}</target_audience>
<goal>{e.g. “generate leads,” “build awareness,” “launch product,” etc.}</goal>
<tone>{e.g. “casual and fun,” “bold and punchy,” “professional and clear”}</tone>
</Inputs>

From Prompt Framework to Automation Tool

Now—imagine you systematize this.

What if instead of marketers needing to manually type these prompts each time, you built a workflow like:

✅ Fill out a simple form (product description, audience, goals, tone)

✅ Choose the “role” or “bundle of roles”

✅ Automatically generate structured strategy documents

This isn’t theoretical. You could implement it today:

Internal Use: For Agencies or Teams

  • Build an internal tool (think n8n, Zapier, Make) that routes prompt templates to an LLM via API.
  • Auto-save outputs into Notion, Airtable, Google Docs, or your CMS.
  • Review and refine as a team before client delivery.
  • Result: Faster strategy development, lower cost per plan, better client consistency.
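The template-routing step above can be sketched with plain Python string templates. The role key, template text, and `render_prompt` helper are hypothetical names for illustration; the rendered string would then be posted to whatever LLM API your workflow tool (n8n, Zapier, Make) calls:

```python
# Sketch of the internal-tool idea: fill a role-based prompt template
# from form inputs, producing a string ready to send to any LLM API.
# Template and function names are illustrative, not a vendor's API.
from string import Template

ROLE_TEMPLATES = {
    "head_of_content": Template(
        "<Task>\nAct as the Head of Content for our new product launch. "
        "Define the content strategy, including content pillars, topic "
        "ideas, and distribution channels.\n</Task>\n"
        "<Inputs>\n<product>$product</product>\n"
        "<target_audience>$audience</target_audience>\n"
        "<goal>$goal</goal>\n<tone>$tone</tone>\n</Inputs>"
    ),
}

def render_prompt(role, form):
    """Build the final prompt string from a submitted form."""
    return ROLE_TEMPLATES[role].substitute(form)

prompt = render_prompt("head_of_content", {
    "product": "AI note-taking app",
    "audience": "busy consultants",
    "goal": "lead generation",
    "tone": "bold, friendly",
})
```

From here, the workflow tool would route `prompt` to the model and auto-save the response into Notion, Airtable, or your CMS for team review.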

💡 Client-Facing SaaS Product

  • Create a self-service portal for SMB clients.
  • Clients choose their marketing goal.
  • Fill in their product and audience.
  • Click “Generate” → Get a customized marketing plan.
  • Upsell human review or customization tiers.
  • Result: A new scalable revenue stream.

The Future: Marketing Teams as AI Orchestrators

Here’s the key insight:

AI doesn’t eliminate marketing teams. It eliminates friction. It turns thinking into doing in seconds.

The winning marketing teams won’t be those who resist automation. They’ll be the ones who master it—shifting their time from grinding out drafts to refining strategy, creativity, and differentiation.

It’s not about “firing marketers.” It’s about making them 10x marketers.


If you’re a founder, marketer, or agency leader wondering how to do more with less—this is your blueprint.

Future of Product Discovery: A Workflow for Modern Product Builders

In today’s product world, speed is king — but so is clarity. We’ve entered an era where AI-assisted “vibe coding” (building quick prototypes without over-planning) can give product teams superpowers. But without a disciplined framework, it’s just chaos dressed up as creativity.

This is where a structured product discovery workflow that integrates vibe coding — without letting it run the show — becomes a game-changer.


Why This Matters Now

We’re in the middle of a massive shift. AI and low-code tools have made it easier than ever to jump into building. But the temptation to skip validation, research, and proper documentation is dangerous — especially for enterprise-grade products.

The winners will be those who can combine speed and experimentation with rigor and documentation.


A New Workflow for the Modern Builder

1️⃣ Start with Clarity — Capture Your Thinking

Before writing a single line of code, write a one-pager. This isn’t a PRD; it’s a thinking document. Your one-pager should:

  • Summarize the problem you’re solving
  • Define who you’re solving it for
  • List the success metrics
  • Capture early hypotheses and open questions

Think of it as your map before you start exploring.


2️⃣ Validate the Problem — Data Before Code

Don’t vibe code until you’ve interrogated the data.

  • Look at user behavior analytics
  • Interview users
  • Analyze competitive products

Your goal: Prove that the problem is real before you start building solutions that might not matter.


3️⃣ Prototype to Align on Vision — Vibe Code (If Needed)

Once you know the problem is worth solving, build a quick prototype. This isn’t about pixel perfection — it’s about communication and alignment.

Here’s the rule:

Only vibe code when it helps communicate the vision or test an idea you can’t fully explain with words or sketches.


4️⃣ Involve Design & Engineering Early

This is where many workflows break down. Don’t throw your vibe-coded prototype over the fence — invite design and engineering into the room early. Let them poke holes in your assumptions, suggest better patterns, and raise potential tech constraints before you’ve locked in the direction.


5️⃣ Test with Users — Validate Assumptions

Your prototype is now a conversation starter.

  • Put it in front of real users.
  • Ask them to complete real tasks.
  • Watch where they struggle.

The goal: Learn before you build production code.


6️⃣ Write the PRD (Later in the Process)

Here’s where traditional product management wisdom flips. Instead of starting with a long PRD, you now write it after the prototype has been tested and refined.

Your PRD should be lighter — but it still matters.


Will the PRD Disappear? No — Here’s Why

Despite the AI hype and faster workflows, the PRD is here to stay. Here’s why:

✔️ Enterprise-grade software environments require even simple changes to be documented, covering key flows, edge cases, error handling, and permissions.

✔️ Products without a user interface will always need a spec to clarify behavior.

✔️ A single source of truth empowers teams to make aligned decisions.

✔️ Post-launch, you’ll need a reference to know what was built and why.

✔️ Even AI systems will need to reference your product’s intent and constraints.

The PRD doesn’t die — it just shows up later, and probably with fewer words.


The Takeaway

Vibe coding is not a replacement for product discipline — it’s a tool that makes disciplined workflows faster.

The future belongs to product teams that:

  • Think clearly before building
  • Validate problems before committing resources
  • Prototype with purpose
  • Involve the whole team early
  • Document enough to keep everyone aligned — but not a word more

Speed matters. But in product, clarity wins.