During last week’s Future Investment Initiative in Riyadh, I had the chance to answer some fascinating questions about world-building, the single-person billion-dollar start-up, what the organisations of the future might look like, and open vs closed models. I’ve captured a few of my thoughts and responses below.
We were asked, “What happens when everyone is working remotely and in the cloud?”
There may be borderless platforms, but there are very few borderless people or organisations. Companies need to be domiciled for the most part, and most individuals still do too. The myth of the fully nomadic world is just that, a myth.
Most people have roots, relationships, and responsibilities that keep them grounded. Yet, those same individuals can collaborate fluidly across borders. The future isn’t placeless; it’s globally connected but locally anchored.
We were asked, “What support can governments provide to ensure the visionaries and entrepreneurs of the future want to locate with them?”
Governments shouldn’t dictate how to support entrepreneurs; they’ll move too slowly and miss the mark. The best thing a state can do is create a frictionless environment with maximum freedom (minimal interference) and access to capital. Give entrepreneurs that foundation and step aside. Innovation, passion and a creative ecosystem will take care of the rest.
We were asked, “Can one person scale a billion-dollar global tech empire?”
To scale globally, you need more than code and AI tools. You generally need investors who trust you and know where you are. You need culture, something that’s ten times harder to build when teams are asynchronous or scattered across time zones. True innovation requires deep, continuous collaboration, internally and externally.
“Empires” are not built by individuals working in isolation; they’re built by networks of people aligned around shared purpose and vision. Of course, there are always exceptions to the rule, and the creation of globally recognised IP has the potential to create billions of dollars in value: Harry Potter was one person, Angry Birds was four, and Pokémon, with five creators, shows this is nothing new.
In the end, if you don’t need capital, time, a team, IP, data or other traditional moat creators, then you generally don’t have a moat. Billion-dollar companies that sustain those valuations without needing a moat are so rare that the concept is a micro consideration.
Great for the Sam Altman hype factory but super damaging to young people who think this is a reasonable objective to have.
We were asked, “Will virtual worlds take over business and entertainment?”
Technology may evolve quickly, but human nature evolves much more slowly. When we build organisations, we replicate our own traits: the need for belonging, collaboration, and visible impact. As virtual realities expand, the desire for tangible experiences will grow in parallel. Business works the same way: when the mission is real, the people show up. They want to be part of something meaningful, and face-to-face will still be relevant.
In entertainment, the more immersive we get, the greater the equal and opposite growth in demand for tangible experiences, which will be great for the industry as a whole.
We were asked, “What impacts can we expect from synthetic and real media becoming indistinguishable from each other?”
The problem isn’t whether content is real or synthetic; it’s how immersive it becomes. The coming wave of hyper-real digital experiences will bring enormous economic potential, but also new risks to mental health and identity. What do we do when our alternate synthetic worlds far exceed the experience most people have of the real world?
As for those who want to use synthetic media with nefarious intent, watermarking and provenance systems will help, but those with malign objectives won’t comply. Distribution platforms may need to enforce digital watermarking to maintain integrity, but we’re still in a detection arms race, battling to identify synthetic content masquerading as truth.
We were asked, “What will be the most valuable human skill left after the AI dust has settled?”
My answer was creativity and collaboration, which may scare some people. However, I believe creativity isn’t rare; it’s latent. Many of the most creative people develop that skill through necessity, often born of constraint or hardship. Imagine if education made creativity and critical thinking the central metrics of value. That could spark a global creative renaissance, one that might prepare us better for the efficiencies that AI is destined to bring.
I’ve given up waiting for schools and governments to adapt. Instead, we should build platforms and tools that draw out creativity for anyone willing to engage: children and adults alike, whether looking for a new challenge or to develop their skills. Games, immersive worlds, filmmaking, and other interactive experiences can become engines for developing creative and critical thinking, if designed with intention.
We were asked, “Regarding artistic rights and copyright protections in the age of generative AI, are we protecting artists as well as we can?”
My answer was a resounding no. Getting the creative AI question wrong isn’t a technical risk; it’s a cultural one. Artists, and the visionaries among them, stretch the edges of our shared reality. They explore what lies beyond our comfort zone and reflect it back to us. That’s how societies grow. If we train AI on stolen art without compensating creators, we risk erasing the artist altogether. And when that happens, we don’t just lose art; we lose the mirror that shows us who we are becoming.
We were asked, “Are strategic vision and pure creativity likely to be the most valuable human traits of the future?”
I believe that strategic vision and pure creativity are the same thing, but I agree with the statement that creativity will be the most valuable human trait of all, closely followed by critical thinking and collaboration.
The system will adapt around what it cannot perform. In a world where the mundane and even the complex are automated, what remains is uniquely and irreducibly human.
We were asked, “Should open-source models be banned?”
Definitely not; it’s simply impossible to ban powerful models like this from here, and attempting to do so would almost certainly drive them to a black market. Our core responsibility as a society is to figure out how to throttle these powerful models to ensure we measure the risk-reward balance correctly.
Open source is crucial for transparency. I feel strongly that models used for national sovereignty must be open in order to deliver transparency and the necessary open ecosystems within any nation. Having multiple models helps to hedge us against the potential bias of any single, powerful, overarching model.
My greatest concern is a concentrated centralisation of power. Allowing tech oligarchs, sorry, companies, to control everything and everybody with a few private models (without our seeing the weights or potential malign intent) means we are putting too much trust in the hands of too few: people who have proven time and time again that they cannot be trusted (except for Google, in my unpopular opinion). The drawbacks of open models, even if those with nefarious intent gain access, cannot possibly be worse than the scenario where a few private entities have total control and domination over our economy, society, and overall human direction. In essence, transparency is key, and keeping models open helps us achieve that and prevents the domination of a few god-like, privately owned models.
It was a great event, speaking to world leaders in all domains over the three days.