AI Changes Everything

At the moment I'm working on a new project. Even over the last two months, the way I do this has changed profoundly. Where I used to spend most of my time in Cursor, I now mostly use Claude Code, almost entirely hands-off.

Do I program any faster? Not really. But it feels like I've gained 30% more time in my day because the machine is doing the work. I alternate between giving it instructions, reading a book, and reviewing the changes. If you had told me even just six months ago that I'd prefer acting as an engineering lead for a virtual programmer intern over hitting the keys myself, I would not have believed it. I can go make a coffee, and progress still happens. I can be at the playground with my youngest while work continues in the background. Even as I'm writing this blog post, Claude is doing some refactorings.

While all this is happening, I've found myself reflecting a lot on what AI means for the world, and I am becoming increasingly optimistic about our future. It's obvious now that we're undergoing a tremendous shift. AI is out of the bottle, and there's no putting it back. Even if we halted all progress today, froze the weights, stopped the training, the systems already out there would still reshape how we live, work, learn, and communicate with one another.

What took longer to accept, however, is just how profound that change really is. As an engineer who comes from a world of deterministic systems and deeply values the craft of engineering, I found the messiness of what agents are doing hard to digest at first. It took me a while to even warm up to tool usage by AI in the first place — just two years ago I was convinced AI might kill my wife. In those two years, however, we've come incredibly far. We have reached the point where, even if we stopped here (and there is no indication we will), AI is already a new substrate for a lot of new innovation, ideas and creations, and I'm here for it. It has moved beyond being a curious tool.

Never before have I seen a technology surface in everyday life so quickly and so widely. Smartphone adoption felt slow in comparison. Today I can't finish a commute or a coffee without spotting someone chatting with ChatGPT. I've had conversations with baristas, hairdressers, parents at the playground — people who aren't what I would consider “tech-savvy” — telling me how AI changed the way they write letters, search for recipes, help their kids with homework, or translate documents. The ripple effect is already massive. And still, the majority of the world hasn't touched these tools yet. Entire communities, professions, and economies have yet to begin exploring their transformation.

That's what makes this moment feel so strange — half revolutionary, half prelude. And yet, oddly, there are so many technologists who are holdouts. How could techies reject this change? Thomas Ptacek's piece “My AI Skeptic Friends Are All Nuts” really resonated with me. It takes a humorous stab at the pushback against AI coming from my own circles. Why is it that so many people I've respected in tech for years — engineers, open source contributors — are the ones most resistant to what's happening? We've built something beyond what we imagined, and instead of being curious, many are dismissive and deny its capabilities. What is that?

Of course the implications are vast and real and the rapid development forces big questions. What does this mean for the education of our children? If AI can teach, explain, and personalize lessons better than a classroom of thirty ever could, what becomes of schools as we know them? And if kids grow up expecting to interact with intelligence — not just absorb content — how do we teach them to reason, create, and collaborate in ways that leverage this power without becoming dependent on it?

On the global stage, the ramifications also seem more fundamental than in previous cycles. It does not feel like the rise of search engines or social media, where the rest of the world was satisfied with being a consumer of US infrastructure. This feels more like the invention of the steam engine. Once it existed, there was no future without it. No country could afford to stay on the sidelines. But steam engines were also quickly commoditized, and there was plenty of competition among manufacturers. It was just too obvious a technological leap. With AI, every nation, every corporation will want its own models, its own control, its own stake in the future.

And so, as I alternate between delegating tasks to Claude and reading something thoughtful, I can't help but feel excited to be present at the beginning of something irreversible and expansive.

I understand why it's tempting to be cynical or fearful. The jobs of programmers and artists will certainly change, but they won't vanish. All the skills I learned as a programmer feel more relevant than ever, just applied through a new kind of tool. Likewise, the abundance of AI-generated art makes me all the more determined to hire an excellent designer as soon as I can. People will always value well-crafted products. AI might raise the bar for everybody all at once, but it's the act of careful deliberation and very intentional creation that sets you apart from everybody else.

Sure, I may have good personal reasons to be optimistic. But the more time I spend with these tools, the more I believe that optimism is the more reasonable stance for everyone. AI can dramatically increase human agency when used well. It can help us communicate across cultures. It can democratize access to knowledge. It can accelerate innovation in medicine, science, and engineering.

Right now it's messy and raw, but the path is clear: we are no longer just using machines, we are now working with them. And while it's early, I think we'll look back at this decade the way past generations looked at electricity or the printing press — not as a curiosity, but as the moment when everything changed.

I encourage you not to meet that moment with cynicism or fear: meet it with curiosity, responsibility, and the conviction that this future will be bright and worth embracing.

My AI Skeptic Friends Are All Nuts

A heartfelt provocation about AI-assisted programming.

Tech execs are mandating LLM adoption. That’s bad strategy. But I get where they’re coming from.

Some of the smartest people I know share a bone-deep belief that AI is a fad — the next iteration of NFT mania. I’ve been reluctant to push back on them, because, well, they’re smarter than me. But their arguments are unserious, and worth confronting. Extraordinarily talented people are doing work that LLMs already do better, out of spite.

All progress on LLMs could halt today, and LLMs would remain the 2nd most important thing to happen over the course of my career.

Important caveat: I’m discussing only the implications of LLMs for software development. For art, music, and writing? I got nothing. I’m inclined to believe the skeptics in those fields. I just don’t believe them about mine.

Bona fides: I’ve been shipping software since the mid-1990s. I started out in boxed, shrink-wrap C code. Survived an ill-advised Alexandrescu C++ phase. Lots of Ruby and Python tooling. Some kernel work. A whole lot of server-side C, Go, and Rust. However you define “serious developer”, I qualify. Even if only on one of your lower tiers.

level setting

† (or, God forbid, 2 years ago with Copilot)

First, we need to get on the same page. If you were trying and failing to use an LLM for code 6 months ago †, you’re not doing what most serious LLM-assisted coders are doing.

People coding with LLMs today use agents. Agents get to poke around your codebase on their own. They author files directly. They run tools. They compile code, run tests, and iterate on the results. They also:

  • pull in arbitrary code from the tree, or from other trees online, into their context windows,
  • run standard Unix tools to navigate the tree and extract information,
  • interact with Git,
  • run existing tooling, like linters, formatters, and model checkers, and
  • make essentially arbitrary tool calls (that you set up) through MCP.

The code in an agent that actually “does stuff” with code is not, itself, AI. This should reassure you. It’s surprisingly simple systems code, wired to ground truth about programming in the same way a Makefile is. You could write an effective coding agent in a weekend. Its strengths would have more to do with how you think about and structure builds and linting and test harnesses than with how advanced o3 or Sonnet have become.
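
To make “surprisingly simple systems code” concrete, here is a minimal sketch (in Python) of what the non-AI half of such an agent could look like. Everything in it is illustrative rather than any particular product's design: llm_complete is a stand-in for whatever model API you use, the tool whitelist is arbitrary, and a real agent would add sandboxing, context management, and smarter error handling.

    import subprocess

    ALLOWED_TOOLS = {"ls", "cat", "grep", "git", "go"}  # e.g. go build, go test, linters

    def llm_complete(transcript):
        """Stand-in for a chat-completion call to whatever model you use.
        Expected to return {"done": True, "summary": ...} or {"cmd": [...]}."""
        raise NotImplementedError("wire this up to your model provider")

    def run_tool(cmd):
        """Run a whitelisted command; its output is the ground truth fed back to the model."""
        if not cmd or cmd[0] not in ALLOWED_TOOLS:
            return f"refused: {cmd!r} is not an allowed tool invocation"
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
        return proc.stdout + proc.stderr  # compiler errors and test failures go back verbatim

    def agent(task, max_steps=20):
        transcript = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            action = llm_complete(transcript)
            transcript.append({"role": "assistant", "content": repr(action)})
            if action.get("done"):
                return action["summary"]
            # Run the requested tool and let the model iterate on the real output.
            transcript.append({"role": "tool", "content": run_tool(action.get("cmd", []))})
        return "stopped: step budget exhausted"

All of the leverage is in run_tool and the whitelist, the Makefile-style ground truth described above, not in anything clever happening inside the model call.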

If you’re making requests on a ChatGPT page and then pasting the resulting (broken) code into your editor, you’re not doing what the AI boosters are doing. No wonder you’re talking past each other.

the positive case

four quadrants of tedium and importance

LLMs can write a large fraction of all the tedious code you’ll ever need to write. And most code on most projects is tedious. LLMs drastically reduce the number of things you’ll ever need to Google. They look things up themselves. Most importantly, they don’t get tired; they’re immune to inertia.

Think of anything you wanted to build but didn’t. You tried to home in on some first steps. If you’d been in the limerent phase of a new programming language, you’d have started writing. But you weren’t, so you put it off, for a day, a year, or your whole career.

I can feel my blood pressure rising thinking of all the bookkeeping and Googling and dependency drama of a new project. An LLM can be instructed to just figure all that shit out. Often, it will drop you precisely at that golden moment where shit almost works, and development means tweaking code and immediately seeing things work better. That dopamine hit is why I code.

There’s a downside. Sometimes, gnarly stuff needs doing. But you don’t wanna do it. So you refactor unit tests, soothing yourself with the lie that you’re doing real work. But an LLM can be told to go refactor all your unit tests. An agent can occupy itself for hours putzing with your tests in a VM and come back later with a PR. If you listen to me, you’ll know that. You’ll feel worse yak-shaving. You’ll end up doing… real work.

but you have no idea what the code is

Are you a vibe coding Youtuber? Can you not read code? If so: astute point. Otherwise: what the fuck is wrong with you?

You've always been responsible for what you merge to main. You were five years ago. And you are tomorrow, whether or not you use an LLM.

If you build something with an LLM that people will depend on, read the code. In fact, you’ll probably do more than that. You’ll spend 5-10 minutes knocking it back into your own style. LLMs are showing signs of adapting to local idiom, but we’re not there yet.

People complain about LLM-generated code being “probabilistic”. No it isn’t. It’s code. It’s not Yacc output. It’s knowable. The LLM might be stochastic. But the LLM doesn’t matter. What matters is whether you can make sense of the result, and whether your guardrails hold.

Reading other people’s code is part of the job. If you can’t metabolize the boring, repetitive code an LLM generates: skills issue! How are you handling the chaos human developers turn out on a deadline?

† (because it can hold 50-70kloc in its context window)

For the last month or so, Gemini 2.5 has been my go-to †. Almost nothing it spits out for me merges without edits. I’m sure there’s a skill to getting a SOTA model to one-shot a feature-plus-merge! But I don’t care. I like moving the code around and chuckling to myself while I delete all the stupid comments. I have to read the code line-by-line anyways.

but hallucination

If hallucination matters to you, your programming language has let you down.

Agents lint. They compile and run tests. If their LLM invents a new function signature, the agent sees the error. They feed it back to the LLM, which says “oh, right, I totally made that up” and then tries again.

You’ll only notice this happening if you watch the chain of thought log your agent generates. Don’t. This is why I like Zed’s agent mode: it begs you to tab away and let it work, and pings you with a desktop notification when it’s done.

I’m sure there are still environments where hallucination matters. But “hallucination” is the first thing developers bring up when someone suggests using LLMs, despite it being (more or less) a solved problem.

but the code is shitty, like that of a junior developer

Does an intern cost $20/month? Because that’s what Cursor.ai costs.

Part of being a senior developer is making less-able coders productive, be they fleshly or algebraic. Using agents well is both a skill and an engineering project all its own, of prompts, indices, and (especially) tooling. LLMs only produce shitty code if you let them.

† (Also: 100% of all the Bash code you should author ever again)

Maybe the current confusion is about who’s doing what work. Today, LLMs do a lot of typing, Googling, test cases †, and edit-compile-test-debug cycles. But even the most Claude-poisoned serious developers in the world still own curation, judgement, guidance, and direction.

Also: let’s stop kidding ourselves about how good our human first cuts really are.

but it’s bad at rust

It’s hard to get a good toolchain for Brainfuck, too. Life’s tough in the aluminum siding business.

† (and they surely will; the Rust community takes tooling seriously)

A lot of LLM skepticism probably isn’t really about LLMs. It’s projection. People say “LLMs can’t code” when what they really mean is “LLMs can’t write Rust”. Fair enough! But people select languages in part based on how well LLMs work with them, so Rust people should get on that †.

I work mostly in Go. I'm confident the designers of the Go programming language didn't set out to produce the most LLM-legible language in the industry. They succeeded nonetheless. Go has just enough type safety, an extensive standard library, and a culture that prizes (often repetitive) idiom. LLMs kick ass generating it.

All this is to say: I write some Rust. I like it fine. If LLMs and Rust aren’t working for you, I feel you. But if that’s your whole thing, we’re not having the same argument.

but the craft

Do you like fine Japanese woodworking? All hand tools and sashimono joinery? Me too. Do it on your own time.

† (I’m a piker compared to my woodworking friends)

I have a basic wood shop in my basement †. I could get a lot of satisfaction from building a table. And, if that table is a workbench or a grill table, sure, I’ll build it. But if I need, like, a table? For people to sit at? In my office? I buy a fucking table.

Professional software developers are in the business of solving practical problems for people with code. We are not, in our day jobs, artisans. Steve Jobs was wrong: we do not need to carve the unseen feet in the sculpture. Nobody cares if the logic board traces are pleasingly routed. If anything we build endures, it won’t be because the codebase was beautiful.

Besides, that’s not really what happens. If you’re taking time carefully golfing functions down into graceful, fluent, minimal functional expressions, alarm bells should ring. You’re yak-shaving. The real work has depleted your focus. You’re not building: you’re self-soothing.

Which, wait for it, is something LLMs are good for. They devour schlep, and clear a path to the important stuff, where your judgement and values really matter.

but the mediocrity

As a mid-late career coder, I’ve come to appreciate mediocrity. You should be so lucky as to have it flowing almost effortlessly from a tap.

We all write mediocre code. Mediocre code: often fine. Not all code is equally important. Some code should be mediocre. Maximum effort on a random unit test? You’re doing something wrong. Your team lead should correct you.

Developers all love to preen about code. They worry LLMs lower the “ceiling” for quality. Maybe. But they also raise the “floor”.

Gemini’s floor is higher than my own. My code looks nice. But it’s not as thorough. LLM code is repetitive. But mine includes dumb contortions where I got too clever trying to DRY things up.

And LLMs aren’t mediocre on every axis. They almost certainly have a bigger bag of algorithmic tricks than you do: radix tries, topological sorts, graph reductions, and LDPC codes. Humans romanticize rsync (Andrew Tridgell wrote a paper about it!). To an LLM it might not be that much more interesting than a SQL join.

But I'm getting ahead of myself. It doesn't matter. If truly mediocre code is all we ever get from LLMs, that's still huge. It's that much less mediocre code humans have to write.

but it’ll never be AGI

I don’t give a shit.

Smart practitioners get wound up by the AI/VC hype cycle. I can’t blame them. But it’s not an argument. Things either work or they don’t, no matter what Jensen Huang has to say about it.

but they take-rr jerbs

So does open source. We used to pay good money for databases.

We’re a field premised on automating other people’s jobs away. “Productivity gains,” say the economists. You get what that means, right? Fewer people doing the same stuff. Talked to a travel agent lately? Or a floor broker? Or a record store clerk? Or a darkroom tech?

When this argument comes up, libertarian-leaning VCs start the chant: lamplighters, creative destruction, new kinds of work. Maybe. But I’m not hypnotized. I have no fucking clue whether we’re going to be better off after LLMs. Things could get a lot worse for us.

LLMs really might displace many software developers. That’s not a high horse we get to ride. Our jobs are just as much in tech’s line of fire as everybody else’s have been for the last 3 decades. We’re not East Coast dockworkers; we won’t stop progress on our own.

but the plagiarism

Artificial intelligence is profoundly — and probably unfairly — threatening to visual artists in ways that might be hard to appreciate if you don’t work in the arts.

We imagine artists spending their working hours pushing the limits of expression. But the median artist isn’t producing gallery pieces. They produce on brief: turning out competent illustrations and compositions for magazine covers, museum displays, motion graphics, and game assets.

LLMs easily — alarmingly — clear industry quality bars. Gallingly, one of the things they're best at is churning out just-good-enough facsimiles of human creative work. I have family in visual arts. I can't talk to them about LLMs. I don't blame them. They're probably not wrong.

Meanwhile, software developers spot code fragments seemingly lifted from public repositories on GitHub and lose their shit. What about the licensing? If you're a lawyer, I defer. But if you're a software developer playing this card? Cut me a little slack as I ask you to shove this concern up your ass. No profession has demonstrated more contempt for intellectual property.

The median dev thinks Star Wars and Daft Punk are a public commons. The great cultural project of developers has been opposing any protection that might inconvenience a monetizable media-sharing site. When they fail at policy, they route around it with coercion. They stand up global-scale piracy networks and sneer at anybody who so much as tries to preserve a new-release window for a TV show.

Call any of this out if you want to watch a TED talk about how hard it is to stream The Expanse on LibreWolf. Yeah, we get it. You don’t believe in IPR. Then shut the fuck up about IPR. Reap the whirlwind.

It’s all special pleading anyways. LLMs digest code further than you do. If you don’t believe a typeface designer can stake a moral claim on the terminals and counters of a letterform, you sure as hell can’t be possessive about a red-black tree.

positive case redux

When I started writing a couple days ago, I wrote a section to “level set” to the state of the art of LLM-assisted programming. A bluefish filet has a longer shelf life than an LLM take. In the time it took you to read this, everything changed.

Kids today don’t just use agents; they use asynchronous agents. They wake up, free-associate 13 different things for their LLMs to work on, make coffee, fill out a TPS report, drive to the Mars Cheese Castle, and then check their notifications. They’ve got 13 PRs to review. Three get tossed and re-prompted. Five of them get the same feedback a junior dev gets. And five get merged.

“I’m sipping rocket fuel right now,” a friend tells me. “The folks on my team who aren’t embracing AI? It’s like they’re standing still.” He’s not bullshitting me. He doesn’t work in SFBA. He’s got no reason to lie.

There's plenty of things I can't trust an LLM with. No LLM has any access to prod here. But I've been first responder on an incident and fed 4o — not o4-mini, 4o — log transcripts, and watched it in seconds spot LVM metadata corruption issues on a host we'd been complaining about for months. Am I better than an LLM agent at interrogating OpenSearch logs and Honeycomb traces? No. No, I am not.

To the consternation of many of my friends, I’m not a radical or a futurist. I’m a statist. I believe in the haphazard perseverance of complex systems, of institutions, of reversions to the mean. I write Go and Python code. I’m not a Kool-aid drinker.

But something real is happening. My smartest friends are blowing it off. Maybe I persuade you. Probably I don’t. But we need to be done making space for bad arguments.

but i’m tired of hearing about it

And here I rejoin your company. I read Simon Willison, and that’s all I really need. But all day, every day, a sizable chunk of the front page of HN is allocated to LLMs: incremental model updates, startups doing things with LLMs, LLM tutorials, screeds against LLMs. It’s annoying!

But AI is also incredibly — a word I use advisedly — important. It's getting the same kind of attention that smartphones got in 2008, and not as much as the Internet got. That seems about right.

I think this is going to get clearer over the next year. The cool kid haughtiness about “stochastic parrots” and “vibe coding” can’t survive much more contact with reality. I’m snarking about these people, but I meant what I said: they’re smarter than me. And when they get over this affectation, they’re going to make coding agents profoundly more effective than they are today.

Fossil fuels are dead (and here's why)

So, I'm going to talk about Elon Musk again, everybody's least favourite eccentric billionaire asshole and poster child for the Thomas Edison effect—get out in front of a bunch of faceless, hard-working engineers and wave that orchestra conductor's baton, while providing direction. Because I think he may be on course to become a multi-trillionaire—and it has nothing to do with cryptocurrency, NFTs, or colonizing Mars.

This we know: Musk has goals (some of them risible, some of them much more pragmatic), and within the limits of his world-view—I'm pretty sure he grew up reading the same right-wing near-future American SF yarns as me—he's fairly predictable. Reportedly he sat down some time around 2000 and made a list of the challenges facing humanity within his anticipated lifetime: roll out solar power, get cars off gasoline, colonize Mars, it's all there. Emperor of Mars is merely his most-publicized, most outrageous end goal. Everything then feeds into achieving the means to get there. But there are lots of sunk costs to pay for: getting to Mars ain't cheap, and he can't count on a government paying his bills (well, not every time). So each step needs to cover its costs.

What will pay for Starship, the mammoth actually-getting-ready-to-fly vehicle that was originally called the "Mars Colony Transporter"?

Starship is gargantuan. Fully fuelled on the pad it will weigh 5000 tons. In fully reusable mode it can put 100-150 tons of cargo into orbit—significantly more than a Saturn V or an Energiya, previously the largest launchers ever built. In expendable mode it can lift 250 tons, more than half the mass of the ISS, which was assembled over 20 years from a seemingly endless series of launches of 10-20 ton modules.

Seemingly even crazier, the Starship system is designed for one-hour flight turnaround times, comparable to a refueling stop for a long-haul airliner. The mechazilla tower designed to catch descending stages in the last moments of flight and re-stack them on the pad is quite without precedent in the space sector, and yet they're prototyping the thing. Why would you even do that? Well, it makes no sense if you're still thinking of this in traditional space launch terms, so let's stop doing that. Instead it seems to me that SpaceX are trying to achieve something unprecedented with Starship. If it works ...

There are no commercial payloads that require a launcher in the 100 ton class, and precious few science missions. Currently the only clear-cut mission is Starship HLS, which NASA are drooling for—a derivative of Starship optimized for transporting cargo and crew to the Moon. (It loses the aerodynamic fins and the heat shield, because it's not coming back to Earth: it gets other modifications to turn it into a Moon truck with a payload in the 100-200 ton range, which is what you need if you're serious about running a Moon base on the scale of McMurdo station.)

Musk has trailed using early Starship flights to lift Starlink clusters—upgrading from the 60 satellites a Falcon 9 can deliver to something over 200 in one shot. But that's a very limited market.

So what could pay for Starship, and furthermore require a launch vehicle on that scale, and demand as many flights as Falcon 9 got from Starlink?

Well, let's look at the way Starlink synergizes with Musk's other businesses. (Bear in mind it's still in the beta-test stage of roll-out.) Obviously cheap wireless internet with low latency everywhere is a desirable goal: people will pay for it. But it's not obvious that enough people can afford a Starlink terminal for themselves. What's paying for Starlink? As Robert X. Cringely points out, Starlink is subsidized by the FCC—cablecos like Comcast can hand Starlink terminals to customers in remote areas in order to meet rural broadband service obligations that enable them to claim huge subsidies from the FCC: in return they get to milk the wallets of their much easier-to-reach urban/suburban customers. This covers the roll-out cost of Starlink, before Musk starts marketing it outside the USA.

So. What kind of vertically integrated business synergy could Musk be planning to exploit to cover the roll-out costs of Starship?

Musk owns Tesla Energy. And I think he's going to turn a profit on Starship by using it to launch space-based solar power satellites. By my back-of-the-envelope calculation, a Starship can put roughly 5-10MW of space-rated photovoltaic cells into orbit in one shot. ROSA—the Roll-Out Solar Arrays now installed on the ISS—are ridiculously light by historic standards, and flexible: they can be rolled up for launch, then unrolled on orbit. Current ROSA panels have a mass of 325kg and three pairs provide 120kW of power to the ISS: 2 tonnes for 120kW suggests that a 100 tonne Starship payload could produce 6MW using current generation panels, and I suspect a lot of that weight is structural overhead. The PV material used in ROSA reportedly weighs a mere 50 grams per square metre, comparable to lightweight laser printer paper, so a payload of pure PV material could have an area of up to 20 million square metres. At 100 watts of usable sunlight per square metre at Earth's orbit, that translates to 2GW. So Starship is definitely getting into the payload ball-park we'd need to make orbital SBSP stations practical. 1970s proposals foundered on the costs of the Space Shuttle, which was billed as offering $300/lb launch costs (a sad and pathetic joke), but Musk is selling Starship as a $2M/launch system, which works out at $20/kg.
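
For anyone who wants to poke at those numbers, here is the same back-of-the-envelope arithmetic as a short Python script. The inputs are just the round figures quoted in the paragraph above (the post's assumptions, not engineering data):

    # Round figures quoted above (the post's assumptions, not engineering data).
    rosa_mass_kg = 2_000         # three pairs of ROSA arrays: roughly 2 tonnes
    rosa_power_kw = 120          # ...delivering about 120 kW to the ISS
    payload_kg = 100_000         # Starship payload to orbit, fully reusable mode
    launch_cost_usd = 2_000_000  # the advertised ~$2M per launch

    print(f"launch cost: ${launch_cost_usd / payload_kg:.0f}/kg")              # ~$20/kg
    mw_per_launch = payload_kg / rosa_mass_kg * rosa_power_kw / 1_000
    print(f"current-generation panels: ~{mw_per_launch:.0f} MW per launch")    # ~6 MW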

So: disruptive launch system meets disruptive power technology, and if Tesla Energy isn't currently brainstorming how to build lightweight space-rated PV sheeting in gigawatt-up quantities I'll eat my hat.

Musk isn't the only person in this business. China is planning a 1 megawatt pilot orbital power station for 2030, increasing capacity to 1GW by 2049. Entirely coincidentally, I'm sure, the giant Long March 9 heavy launcher is due for test flights in 2030: ostensibly to support a Chinese crewed Lunar expedition, but I'm sure if you're going to build SBSP stations in bulk and the USA refuses to cooperate with you in space, having your own Starship clone would be handy.

I suspect that if Musk uses Tesla Energy to push SBSP (launched via Starship), he will find a way to use his massive PV capacity to sell carbon offsets to his competitors. (Starship is designed to run on a fuel cycle that uses synthetic fuels—essential for Mars—that can be manufactured from carbon dioxide and water, if you add enough sunlight. Right now it burns fossil methane, but an early demonstration of the capability of SBSP would be using it to generate renewable fuel for its own launch system.)

Globally, we use roughly 18TW of power on a 24x7 basis. SBSP's big promise is that, unlike ground-based solar, the PV panels are in constant sunlight: there's no night when you're far enough out from the planetary surface. So it can provide base load power, just like nuclear or coal, only without the carbon emissions or long-lived waste products.

Assuming a roughly 70% transmission loss from orbit (beaming power by microwave to rectenna farms on Earth is inherently lossy), we would need roughly 60TW of PV panels in space. Which is 60,000 GW of panels, at roughly 1 km^2 per GW. With maximum optimism that looks like somewhere in the range of 3000-60,000 Starship launches, at $2M/flight that's $6Bn to $120Bn ... which, over a period of years to decades, is chicken feed compared to the profit to be made by disrupting the 95% of the fossil fuel industry that just burns the stuff for energy. The cost of manufacturing the PV cells is another matter, but again: ground-based solar is already cheaper to install than shoveling coal into existing power stations, and in orbit it produces four times as much electricity per unit area.
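
And the system-level totals, again taking the post's own round numbers at face value:

    # System-level round numbers from the post, taken at face value.
    world_demand_tw = 18           # average global power use, 24x7
    delivered_fraction = 0.3       # i.e. roughly 70% lost beaming power to rectennas
    launch_cost_usd = 2_000_000

    orbital_pv_tw = world_demand_tw / delivered_fraction
    print(f"orbital PV needed: ~{orbital_pv_tw:.0f} TW (= {orbital_pv_tw * 1000:,.0f} GW)")

    for launches in (3_000, 60_000):   # the post's "maximum optimism" range
        print(f"{launches:,} launches -> ${launches * launch_cost_usd / 1e9:,.0f}Bn in launch costs")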

Is Musk going to become a trillionaire? I don't know. He may fall flat on his face: he may not pick up the gold brick that his synergized businesses have placed at his feet: any number of other things could go wrong. I find the fact that other groups—notably the Chinese government—are also going this way, albeit much more slowly and timidly than I'm suggesting, is interesting. But even if Musk doesn't go there, someone is going to get SBPS working by 2030-2040, and in 2060 people will be scratching their heads and wondering why we ever bothered burning all that oil. But most likely Musk has noticed that this is a scheme that would make him unearthly shitpiles of money (the global energy sector in 2014 had revenue of $8Tn) and demand the thousands of Starship flights it will take to turn reusable orbital heavy lift into the sort of industry in its own right that it needs to be before you can start talking about building a city on Mars.

Exponentials, as COVID19 has reminded us, have an eerie quality to them. I think a 1MW SBSP station by 2030 is highly likely, if not inevitable, given Starship's lift capacity. But we won't be waiting until 2049 for a 1GW SBSP station: we'll blow through that target by 2035, have a 1TW cluster that lights up the night sky by 2040, and by 2050 we may have ended use of non-synthetic fossil fuels.

If this sounds far-fetched, remember that back in 2011, SpaceX was a young upstart launch company. In 2010 they began flying Dragon capsule test articles: in 2011 they started experimenting with soft-landing first stage boosters. In the decade since then, they've grabbed 50% of the planetary launch market, launched the world's largest comsat cluster (still expanding), begun flying astronauts to the ISS for NASA, and demonstrated reliable soft-landing and re-flight of boosters. They're very close to overtaking the Space Shuttle in terms of reusability: no shuttle flew more than 30 times and SpaceX lately announced that their 10 flight target for Falcon 9 was just a goalpost (which they've already passed). If you look at their past decade, then a forward projection gets you more of the same, on a vastly larger scale, as I've described.

Who loses?

Well, there will be light pollution and the ground-based astronomers will be spitting blood. But in a choice between "keep the astronomers happy" and "climate oopsie, we all die", the astronomers lose. Most likely the existence of $20/kg launch systems will facilitate a new era of space-based astronomy: this is the wrong decade to be raising funds to build something like ELT, only bigger.

1 public comment
LeMadChef
1372 days ago
Surprising optimism from Charlie here.
Denver, CO

The mRNA vaccine revolution is just beginning

NO ONE EXPECTED the first Covid-19 vaccine to be as good as it was. “We were hoping for around 70 per cent, that’s a success,” says Dr Ann Falsey, a professor of medicine at the University of Rochester, New York, who ran a 150-person trial site for the Pfizer-BioNTech vaccine in 2020.

Even Uğur Şahin, the co-founder and CEO of BioNTech, who had shepherded the drug from its earliest stages, had some doubts. All the preliminary laboratory tests looked good; since he saw them in June, he would routinely tell people that “immunologically, this is a near-perfect vaccine.” But that doesn’t always mean it will work against “the beast, the thing out there” in the real world. It wasn’t until November 9, 2020, three months into the final clinical trial, that he finally got the good news. “More than 90 per cent effective,” he says. “I knew this was a game changer. We have a vaccine.”

“We were overjoyed,” Falsey says. “It seemed too good to be true. No respiratory vaccine has ever had that kind of efficacy.”

The arrival of a vaccine before the close of the year was an unexpected turn of events. Early in the pandemic, the conventional wisdom was that, even with all the stops pulled out, a vaccine would take at least a year and a half to develop. Talking heads often noted that the previous fastest-ever vaccine, developed for mumps back in 1967, took four years. Modern vaccines often stretch out past a decade of development. BioNTech – and US-based Moderna, which announced similar results later the same week – shattered that conventional timeline.

Neither company was a household name before the pandemic. In fact, neither had ever had a single drug approved before. But both had long believed that their mRNA technology, which uses simple genetic instructions as a payload, could outpace traditional vaccines, which rely on the often-painstaking assembly of living viruses or their isolated parts. mRNA turned out to be a vanishingly rare thing in the world of science and medicine: a promising and potentially transformative technology that not only survived its first big test, but delivered beyond most people’s wildest expectations.

But its next step could be even bigger. The scope of mRNA vaccines always went beyond any one disease. Like moving from a vacuum tube to a microchip, the technology promises to perform the same task as traditional vaccines, but exponentially faster, and for a fraction of the cost. “You can have an idea in the morning, and a vaccine prototype by evening. The speed is amazing,” says Daniel Anderson, an mRNA therapy researcher at MIT. Before the pandemic, charities including the Bill & Melinda Gates Foundation and the Coalition for Epidemic Preparedness Innovations (CEPI) hoped to turn mRNA on deadly diseases that the pharmaceutical industry has largely ignored, such as dengue or Lassa fever, while industry saw a chance to speed up the quest for long-held scientific dreams: an improved flu shot, or the first effective HIV vaccine.

Amesh Adalja, an expert on emerging diseases at the Johns Hopkins Center for Health Security, in Maryland, says mRNA could “make all these applications we were hoping for, pushing for, become part of everyday life.”

“When they write the history of vaccines, this will probably be a turning point,” he adds.

While the world remains focused on the rollout of Covid-19 vaccines, the race for the next generation of mRNA vaccines – targeted at a variety of other diseases – is already exploding. Moderna and BioNTech each have nine candidates in development or early clinical trials. There are at least six mRNA vaccines against flu in the pipeline, and a similar number against HIV. Vaccines for Nipah, Zika, herpes, dengue, hepatitis and malaria have all been announced. The field sometimes resembles the early stages of a gold rush, as pharma giants snap up promising researchers for huge contracts – Sanofi recently paid $425 million (£307m) to partner with a small American mRNA biotech called Translate Bio, while GSK paid $294 million (£212m) to work with Germany's CureVac.

...

167 Pieces of Life & Work Advice from Kevin Kelly, Founding Editor of Wired Magazine & The Whole Earth Review

Image by Christopher Michel, via Wikimedia Commons

I am a big admirer of Kevin Kelly for the same reason I am of Brian Eno—he is constantly thinking. That thirst for knowledge and endless curiosity has always been the backbone of their particular art forms. For Eno it's music, but for Kelly it's his editorship of the Whole Earth Review and then Wired magazine, providing a space for big ideas to reach the widest audience. (He's also the reason the Nakasendo trail is on my bucket list, after I saw his photo essay on it.)

On his 68th birthday in 2020, Kelly posted on his blog a list of 68 “Unsolicited Bits of Advice.” One bit of advice that frames his thought process and his work is this one:

“I’m positive that in 100 years much of what I take to be true today will be proved to be wrong, maybe even embarrassingly wrong, and I try really hard to identify what it is that I am wrong about today.”

However, the list is more about wisdom from a life well spent. Many of the entries fall under the art of being a curious human among other humans:

  • Everyone is shy. Other people are waiting for you to introduce yourself to them, they are waiting for you to send them an email, they are waiting for you to ask them on a date. Go ahead.
  • The more you are interested in others, the more interesting they find you. To be interesting, be interested.
  • Being able to listen well is a superpower. While listening to someone you love keep asking them “Is there more?”, until there is no more.

And this is probably the hardest piece of advice these days:

  • Learn how to learn from those you disagree with, or even offend you. See if you can find the truth in what they believe.

Other bits of advice have to do with creativity and being an artist:

  • Always demand a deadline. A deadline weeds out the extraneous and the ordinary. It prevents you from trying to make it perfect, so you have to make it different. Different is better.
  • Don’t be the smartest person in the room. Hangout with, and learn from, people smarter than yourself. Even better, find smart people who will disagree with you.
  • To make something good, just do it. To make something great, just re-do it, re-do it, re-do it. The secret to making fine things is in remaking them.
  • Art is in what you leave out.

And some of the more interesting ones are his disagreements with perceived wisdom:

  • Following your bliss is a recipe for paralysis if you don’t know what you are passionate about. A better motto for most youth is “master something, anything”. Through mastery of one thing, you can drift towards extensions of that mastery that bring you more joy, and eventually discover where your bliss is.

One year later, Kelly has returned with 99 more bits of advice. I guess he couldn’t wait til his 99th birthday for it. Some favorites include:

  • If something fails where you thought it would fail, that is not a failure.
  • Being wise means having more questions than answers.
  • I have never met a person I admired who did not read more books than I did.
  • Every person you meet knows an amazing lot about something you know virtually nothing about. Your job is to discover what it is, and it won’t be obvious.

and finally:

  • Don’t let your email inbox become your to-do list.

There is a small shift in Kelly’s 2021 list from his 2020 list, like a little more frustration with the world, a need for more order in the chaos. I wonder what his advice will be in a few more years?

via BoingBoing

Related Content:

Wired Co-Founder Kevin Kelly Gives 36 Lectures on Our Future World: Education, Movies, Robots, Autonomous Cars & More

The Best Magazine Articles Ever, Curated by Kevin Kelly

What Books Could Be Used to Rebuild Civilization?: Lists by Brian Eno, Stewart Brand, Kevin Kelly & Other Forward-Thinking Minds

Ted Mills is a freelance writer on the arts who currently hosts the Notes from the Shed podcast and is the producer of KCRW’s Curious Coast. You can also follow him on Twitter at @tedmills, and/or watch his films here.

167 Pieces of Life & Work Advice from Kevin Kelly, Founding Editor of Wired Magazine & The Whole Earth Review is a post from: Open Culture. Follow us on Facebook and Twitter, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

Why Bad CEOs Fear Remote Work

Remote work expert David Tate wrote that when fearful CEOs talk about workplace culture, they're really talking about workplace control. Their insecurities demand that the way employees work is always visible, highly regulated, and done in the ways executives prefer, rather than in whatever way is best for everyone's productivity. Remote work is seen as a threat by many CEOs simply because of their fear of change and resistance to progress. That fear leads to an irrational rejection of remote work, instead of a thoughtful examination of where it has succeeded and what can be learned.

In her May 6th Washington Post opinion article, “I worry about the erosion of office culture with more remote work,” CEO Cathy Merrill makes two fundamental mistakes common among fearful executives. First, she shows an ignorance of alternatives: many organizations worked remotely for years before the pandemic and have solved problems she considers unsolvable. She may not prefer their approaches, but her lack of awareness of them is incompetence. Second, she infantilizes her employees by presuming they are neither able nor motivated to be productive and collaborate when the CEO can't see them down the hallway.

We are over a year into a pandemic and an era of great social unrest and uncertainty, yet Merrill has chosen remote work, and not other likely psychological or cultural factors, as the singular reason why workplace performance has declined. And as if this weren't enough of an oversight, her evidence against remote work consists mostly of examples from executive friends of their own self-described management incompetence.

She offered the story of an anonymous CEO with a new but struggling employee. Yet none of the leadership team did anything about it:

 A friend at a Fortune 500 company tells of a colleague who was hired just as the pandemic hit. He struggled. He wasn’t getting the job done. It was very hard for the leadership team to tell what the problem was. Was it because he was new? Was he not up to the work? What was the specific issue? Worse, no one wanted to give him feedback over Zoom when they hadn’t even met him. Professional development is hard to do remotely.

This is simply a management failure. Does this company not have telephones? Or email? Have they never worked with a vendor or client that wasn’t in the same building? They are responsible for helping this employee regardless of what technologies are available or not. This is inept management hiding behind technological fear.  

Merrill estimates that 20% of work is helping a colleague or mentoring more junior people, extra work that she feels is impossible to do remotely. This is despite dozens of popular collaboration tools and mentoring programs that work entirely online. It also ignores the dozens of remote corporations, like Automattic and Citrix, that have vibrant work cultures where these “extra” activities are successfully done remotely.

Merrill and her peers might not like these alternatives, but she never explains why. She even goes so far as to suggest that remote workers should be paid less and lose their benefits, since in her estimation they will never be able to contribute in these extra ways. She effectively threatened her own staff through the article (and apologized later, after her staff revolted).

If the employee is rarely around to participate in those extras, management has a strong incentive to change their status to “contractor.” Instead of receiving a set salary, contractors are paid only for the work they do, either hourly or by appropriate output metrics. That would also mean not having to pay for health care, a 401(k) match and our share of FICA and Medicare taxes

One quality of a great CEO is the ability to look into the future and show their organization the way forward. Instead of blaming employees, they take responsibility for solving problems. For every serious issue that arises, they ask themselves, “What can I do or change in my own behavior to lead my staff to a better place?” They diversify their network to ask, “Who has solved the problem my organization is facing somewhere else, and what can we learn?” Or, perhaps most critical of all, they invite their own employees to participate in both defining the problem and exploring ways to solve it, instead of drawing lines in the sand and assuming the only way forward is the one that makes them the most comfortable.

Technology is often seen as a silver bullet, oversold as the magic solution to hard problems. This overestimates what a technology can do, as often it's the management culture that is the real cause. But in the case of Merrill, her CEO peers, and remote work, technology is being used as a scapegoat. It's the safe target to blame, as it requires no introspection or accountability. Leaders who do this become fear-driven, handing their competitors an advantage that can be gained simply by exercising curiosity and seeking new knowledge. Smart CEOs choose to invest in their work culture and grow it for the future instead of hoping for the past to return.
