Anthropic recently announced that its latest model, Mythos, has become "too powerful" at cybersecurity work and software vulnerability discovery, reaching a level the company finds unsettling, and that it has therefore taken the unusually restrained step of not releasing the model publicly. Access is limited to a handful of critical infrastructure companies within Project Glasswing, and ordinary developers cannot reach it through the API either. (Some analysts have pointed out, of course, that this arrangement also conveniently helps prevent model distillation and locks in enterprise-tier customers.) But even with this "beast" kept on a leash for now, the coding capabilities of today's mainstream AI models are already enough to make cloning a product trivial.

Last week, a developer on Reddit claimed he had spent a year "reverse-engineering the SwiftUI API" to build an entirely new Swift web framework. The post was fluent and precise in its terminology, and it drew considerable attention. Paul Hudson soon appeared in the comments and called it out: the so-called "independent research" was in fact little more than a string replacement performed on his MIT-licensed open-source project Ignite, so superficial that the original author's personal, stylistically distinctive code comments were preserved verbatim. The entire repository had then been squashed into a single commit to erase its history, and the license illicitly changed to the copyleft GPL. A number of developers in the community suspect that the "reverse-engineering SwiftUI" narrative itself was AI-generated as well. More intriguingly, the author in question had actually been a major contributor to Ignite himself. When Vibe Coding has driven the cost of "repackaging a project" close to zero, "I was involved in this" can itself become a rhetorical device for blurring the lines of responsibility.

Around the same time, Vibe Island, a polished macOS menu-bar app for monitoring AI coding agents, was pixel-for-pixel cloned shortly after its release. Although the copycat published its code under the banner of an "open-source alternative," the impact on the original author's sales and creative motivation was real and significant. Yet even if the author wished to pursue legal action, he would run into a distinctly modern problem: to establish ownership and enforce his rights, he might need to prove that his work possesses sufficient human originality and account for the extent of AI-generated content involved; otherwise, he would face considerably greater legal uncertainty.

Fatbobman's Swift Weekly #131
Apr 13 at 11:12 PM