What comes after the Gold Rush?
A conversation that I don't think happens enough is "what happens after the gold rush?" I think it's widely agreed that the shovel seller usually ends up the happiest, but after the gold rush, they might be in a tight spot.
If an owner expects to sell 50 shovels this quarter because they sold 50 last quarter, but they only sell 2 because the gold rush is over, they might lose all their profits on the next wave of shovels and the infrastructure they invested in.
In a market where no amount of supply can satisfy demand, rash decisions become defensible under the business judgment rule, so why not bet big? (This is not legal advice.)
Yet despite the massive chip shortage, no one is looking to build more fabs. Tech giants like Microsoft are so desperate for storage that they are sending people across the world to factories to beg for more allocation.
Why is no one building more factories and investing in infrastructure? Because the people selling the shovels got smart. Without a moat or competitive advantage, commodity pricing rules. Nearly every business contends with this reality eventually; even a big bad M&A firm struggles to maintain a moat beyond its relationships with buyers.
That is the issue: if relationships are not the moat, your business can go under once someone figures out how to do it cheaper. In a race to the bottom, everyone is a loser but the consumer.
Venezuela is a perfect example of this phenomenon. Despite having the world's largest proven oil reserves, it produces comparatively little oil: the cost of extraction exceeds the going rate, and its thick, sour crude is less desirable.
More isn’t Infinite
Why do people think that more resources means infinite resources? The fundamental problem the economy is trying to solve is that resources are limited while demand is unlimited. If demand hasn't changed, an increase in supply, even a 10x increase, will not solve the problem.
The standard of living will go up; people now have air conditioning that not even a king could have had before. You'll still need a job, though. :/
Source: https://www.businessinsider.com/elon-musk-retirement-saving-abundance-ai-tech-tesla-spacex-billionaires-2026-1
OpenAI or ClosedAI?
Will China surpass the US with its community-oriented approach to AI? The most notable part of this article, to me, was the short line about how companies are no longer sharing information about how their models are built as competition speeds up.
The American system's incentive is to capture as much money, and therefore attention, as possible, which leads to secrecy; yet DeepSeek rocked the world by being open source. This strategy isn't new: China has been absorbing IP in everything from its remarkable deals with Tesla to develop its own electric cars, to stealing information about the F-35 through cybercrime.
But it works on an individual scale as well: Chinese migrants shared recipes so that new immigrants could replicate the success of established Chinese business owners.
The next century will prove which strategy works better; China spent the last one catching up. Yet cooperation is not the only KPI for success. Nietzsche taught that delusion and persistence are what truly lead to advancement. Sometimes it takes a mad lad with too much money to solve problems people did not even consider.
Even as China seeks to dominate AI, it relies on the revolutionary discoveries of others, and on the illogical dreamers who push those ideas until they become profitable. With AI's fast improvement cycle and its education problem, the American way might still win.
Both have different financial tools available: China can leverage low interest rates subsidized by the government to win on price, while America can use complex ownership structures and a robust legal and financial system to create products like Uber or OpenAI.
It's going to be an interesting year...
Source: https://www.technologyreview.com/2026/01/07/1130795/what-even-is-a-parameter/amp/
Data Integrity
I'm low-key worried about data integrity over the next decade. If someone like Wikipedia doesn't step up and offer an open-source dataset, it's going to be impossible to verify factual claims.
I was thinking about how AI hallucinates. From our perspective, a hallucination is "wrong output not matching the target response," but from the AI's perspective it could be a correct answer, since it likely reflects the general sentiment of the broader Internet. It got so bad that Grok had to be pinned to Elon Musk's political takes on Twitter so it would at least be on brand if it couldn't achieve political correctness.
With cases of political-actor influence like the one in this video, it seems someone needs to teach AI the difference between reality and a simulation. You need real-world data to back everything up. I think a set of open-source experiments, all published to a single place in a predictable format, could be that dataset. If Wikipedia or even GitHub provided a space for that, AI might get past the truth issue. One dataset without all the noise.
Source: https://www.youtube.com/watch?v=CsCweuN9Ua8
Truth-ish
Everyone forgets about the sophists and the complexity of truth. There seems to be a growing belief that AI will make people lazier and dumber because they no longer think for themselves.
Do you know what this looks like from thousands of years ago? The sophists. They taught how to win an argument without facts or evidence, just effective communication in public and in court. Truth took a backseat to argumentation, but everyone survived.
I'm not doubting that some people will stop thinking and be negatively impacted by AI. That will happen. But why does no one focus on the positive, the incredible abilities AI gives us?
False understanding has been a problem since language was invented; it's not a new phenomenon to think you understand something only to realize you are wrong. Now it just happens faster, and there is a lot of money going into solving these problems.
Source: https://www.psychologytoday.com/us/blog/the-digital-self/202512/watching-intelligence-lose-its-friction/amp
Consciousness and the Law
Consciousness isn't just an idea; it's a legal barrier. Hear me out. In philosophy, consciousness is usually framed as requiring that there be something it is like to be you: something that experiences pain or pleasure, or complex abstract emotions like love, rather than just processing data.
There's a thought experiment that assigns every person in China a few neurons and asks whether China would be conscious if the people carried out all the same processes the neurons do in a brain.
I think this is especially relevant today. We all know our dogs are conscious. Sure, they like getting food, but they certainly love you back beyond a food drive. My dog Ace jumps up and can't stop kissing me when I come home from school after months of my mom feeding him (I think she stole my dog). But we don't consider dogs conscious, because then they would have rights that people don't want to deal with.
Throughout history, different races have been called lesser because it was convenient to grant them fewer rights. "It's amazing what you can accomplish if you throw human suffering at it" is an uncomfortable sentiment lurking behind historical accomplishments like the pyramids, the transcontinental railroad, and many other things.
No, the LLM on your phone isn't conscious. But if AI can replicate what we do, at what point do we say it is conscious, and deal with the legal consequences?
Source: https://www.popularmechanics.com/science/a69809289/digital-brain-model/
Make Yourself Obsolete!
I'm not sure how much these sorts of roles should be celebrated; you are quite literally working yourself out of a job.
I'd rather have a much more sustainable career path that doesn't leave a skills gap in my resume for tasks that no longer exist. Maybe I'm just a boomer.
Source: https://www.cnbc.com/amp/2025/12/19/34-year-old-entrepreneur-earns-200-an-hour-training-ai-models.html
Local?
How much value do people actually find in running AI locally? Unless privacy is a huge concern, I just don't understand why people are clamoring for dedicated hardware.
Not only is the process incredibly expensive (the builds in this video easily exceed $40,000), but it requires constant tinkering to get a usable result.
I'm all for getting rid of subscriptions, but I'll be waiting a while before I buy hardware at this point.
Source: https://www.youtube.com/watch?v=4l4UWZGxvoc
Stats or Facts?
One of my most controversial takes is that statistics aren't real. When I was watching Star Wars, at one point C-3PO was quoting the odds of surviving an asteroid field, much to Han Solo's irritation.
But Han had a reason to be angry: those odds are only meaningful if you assume the previous pilots faced the same factors Han did. Statistics are far more complicated than most people realize, to the point where the same statistics can be used to argue both sides of an argument.
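A concrete way to see how the same numbers can argue both sides is Simpson's paradox. Here is a short Python sketch using the classic kidney-stone textbook example (the numbers come from that well-known example, not from anything in this post): treatment A has the better success rate in every subgroup, yet B looks better once you aggregate.

```python
# Simpson's paradox: the same raw numbers argue both sides depending on
# whether you aggregate. Classic kidney-stone dataset: (successes, total).
data = {
    "A": {"small": (81, 87), "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within each subgroup, treatment A has the higher success rate...
for size in ("small", "large"):
    a, b = rate(*data["A"][size]), rate(*data["B"][size])
    print(f"{size} stones: A={a:.0%} vs B={b:.0%}")

# ...yet aggregated, treatment B looks better, because A was assigned the
# harder (large-stone) cases far more often.
overall = {
    t: rate(sum(s for s, _ in groups.values()),
            sum(n for _, n in groups.values()))
    for t, groups in data.items()
}
print(f"overall: A={overall['A']:.0%} vs B={overall['B']:.0%}")
```

Pick the subgroup view or the aggregate view, and you can honestly report either "A wins" or "B wins" from identical data.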
Elon has no possible way of saying xAI has a 10% likelihood of success; you cannot have data about something that does not exist. It rings as hollow as saying that self-driving will be here next quarter. While experts across the field agree that artificial general intelligence is a long way off, and that our current methods will not get there, Elon still promises Mars for a bit of extra cash...
Source: https://www.businessinsider.com/xai-all-hands-agi-superintelligence-funding-success-optimus-space
Consequences
It's funny how things don't turn out the way we intended. First the US blocks chip exports to China, so China starts producing its own chips. Then China starts producing AI models that use less compute, and now people are smuggling in Nvidia chips.
Trying to limit your competitors isn't the way to win, and it's clearly not working.
Source: https://www.youtube.com/watch?v=lh0CrcPt3OM
6x Productivity Gap
The biggest question about AI productivity is not whether it will work; it's what incentives will be in place.
Warren Buffett emphasizes the power of incentives with quotes like, "If you have a dumb incentive system, you get dumb outcomes," and, "Show me the incentive and I will show you the outcome."
If employers do not properly incentivize increased productivity, employees will take those productivity gains and run with them. Americans could very much use better work-life balance, but if innovation isn't incentivized, it will be crushed.
If increased productivity only leads to an increased workload, why would a top performer share their techniques?
Business owners: ask yourself what outcome you want, and figure out how to incentivize it. If you don't, someone else will take your best employees.
Source: https://venturebeat.com/ai/openai-report-reveals-a-6x-productivity-gap-between-ai-power-users-and
I’m the Best (I said so)
I probably wouldn't take Google's word that their model is the best, since the margins are slim and the leading team created the benchmark...
Still, this is a very interesting benchmark, and as its results get more accurate it will be increasingly useful. Now I just have to hope it stays relevant beyond flexing a top score. (I know their AI course didn't.)
Source: https://deepmind.google/blog/facts-benchmark-suite-systematically-evaluating-the-factuality-of-large-language-models/
Hard Problems
All this new AI buzz has brought up another classic: the hard-problems paradox. Cryptography needs problems that are easy to state and compute, but hard to reverse or solve efficiently without the key.
The interesting part is that no one knows whether these "hard problems" actually exist! Complexity theory has never proved that any problem meets those criteria; hence the paradox: the entire field of securing your data rests on an unproven theory!
If that theory is wrong, the entire security industry collapses overnight. New technology like quantum computing already makes it look dubious, and many of the headlines about quantum computers solving problems that would take a supercomputer billions of years are about exactly these cryptography problems!
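To make the asymmetry concrete, here is a toy Python sketch of the one-way behavior cryptography bets on: multiplying two primes is a single cheap operation, while recovering them by naive trial division takes roughly as many steps as the smaller prime is large. The primes below are arbitrary small examples (real systems use numbers hundreds of digits long), and, as noted above, nobody has proven the reverse direction must stay hard.

```python
# Toy one-way function: easy to compute forward, slow to invert naively.
def multiply(p, q):
    return p * q  # the "easy" direction: one operation

def factor(n):
    """The "hard" direction: naive trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

p, q = 104729, 1299709  # two small primes, purely for illustration
n = multiply(p, q)      # instant
print(factor(n))        # ~100,000 loop iterations to undo one multiply
```

Scale those primes up to 300 digits each and trial division (and every known classical algorithm) becomes infeasible; that gap, unproven but so far unbroken, is the whole bet.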
TLDR: nothing is safe, it's just a matter of how much someone wants something and what resources they have access to.
Source: https://www.quantamagazine.org/cryptographers-show-that-ai-protections-will-always-have-holes-20251210/
Decoupling of Impressions
The recent decoupling of impressions from click-throughs has been a drastic shift for many online companies. This change fundamentally threatens their business, but should the consumer care?
With most review sites now AI-generated garbage farming engagement and affiliate links, a large portion of the Internet is no longer useful. It might be time for a shakeup.
The advantage of the law is that it's slow. That time allows everything to shake out and more voices to be heard before a decision is made. It seems rash to write laws around a technology whose potential no one yet understands. Let the free market handle this...
Source: https://www.bbc.com/news/articles/crl95eg33k1o.amp
Mandatory Training…
Hot take: if your training does not require active input and it's this easy for AI to automate, you probably have other issues.
Do I like AI browsers? No; I think the privacy story and general usefulness just aren't there. But why not just use Microsoft Edge? It already has enterprise-level security and privacy with most of the "benefits" of an AI browser.
And which IT team is letting users install programs? The article's main concern seems to be getting users not to run a dangerous program, but isn't that why IT blocks user installs? I can think of half a dozen programs with similar functions that also won't be installed on my work computer; what's the difference, other than headline buzz?
Source: https://www.theregister.com/2025/12/08/gartner_recommends_ai_browser_ban/
American Infrastructure and AI
America needs to figure out power, fast. Electric bills are spiking for everyone as our grid strains under the new AI load. China already has double our capacity, and it has invested heavily in the next generation of energy independence with mass solar manufacturing.
I'm hoping nuclear steps up in a big way soon, with either small modular fission reactors or economically viable fusion.
Source: https://www.globaltimes.cn/page/202512/1349940.shtml
Poison Pill?
Another angle I don't think people are considering: what happens when AI gets good at figuring out what the truth is? Everyone is worried about AI poison-pilling itself by training on AI-generated content, but how much false information do people post?
The next generation of ai (or maybe the one after that) will eventually solve how to reliably determine truth without interacting with the real world. However this problem is solved, it will change how the Internet works.
Source: https://www.theguardian.com/technology/2025/dec/06/ai-research-papers
What is an “AI?”
You know what really pisses me off? How many different technologies we've put under the "AI umbrella," and the annoying part is that it's not technically wrong. It makes it easy to mistake the AI in a vacuum cleaner for the same caliber of AI used in medicine. AI has become a marketing band-aid for every quarterly goal. Look at this MIT article: it promises AI that can "speak objects into existence." Lo and behold, the only function of the AI was a glorified Boolean replacement for interpreting natural language.
I think that's what everyone is missing: AI is such a game changer because it can improve everything. Jensen Huang recently said that when Nvidia saw AlexNet and the advances it made in image recognition, Nvidia realized the underlying methodology was not limited to image recognition. While consumer-facing AI is very new, Nvidia gave OpenAI one of the only chips from a multi-billion-dollar project well before anyone had heard of ChatGPT. The only real innovation in AI over the last decade is that it can scale with compute resources.
Moore's law gave AI its chance, because no one expected everyone to have a data-center-class supercomputer in their pocket. What people don't realize about Moore's law is that you can also read it as the price of compute halving. The ABS in your car's brakes has more compute than what took us to the moon.
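As a back-of-envelope illustration of that "price of compute halves" reading, here is a tiny Python sketch. The two-year halving period is a stylized Moore's-law figure, not a measured one:

```python
# If the price of a unit of compute halves every ~2 years (a rough,
# stylized assumption), a fixed budget buys exponentially more compute.
halving_years = 2
for years in (10, 20, 30):
    multiplier = 2 ** (years / halving_years)
    print(f"after {years} years: {multiplier:,.0f}x compute per dollar")
```

Even at that rough rate, twenty years compounds to roughly a thousandfold, which is how pocket hardware catches up to yesterday's data centers.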
Source: https://news.mit.edu/2025/mit-researchers-speak-objects-existence-using-ai-robotics-1205
Billable Hours No More?
It seems more likely that the standard billing rate for these tasks will decrease, rather than the entire billable-hour system disappearing. Some areas of law, like divorce cases, are not allowed to use outcome-based compensation for attorneys.
The system I have set up for our clients is to calculate how many hours each task is worth in our local jurisdiction. As more firms use AI, the time per task will decrease, while this model keeps clients within the Model Rules by never charging an unreasonable rate for a task.
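A minimal sketch of that per-task idea, with the task names, benchmark hours, and rates all invented purely for illustration: each task carries a benchmark hour count for the local jurisdiction, and as AI speeds up the work the billed hours shrink while the hourly rate stays at the jurisdiction's standard.

```python
# Hypothetical per-task billing model (all figures invented).
TASK_HOURS = {                # benchmark hours per task in the jurisdiction
    "draft_complaint": 6.0,
    "discovery_review": 20.0,
    "motion_response": 8.0,
}

def bill(task, hourly_rate, ai_speedup=1.0):
    """Billed amount for a task; AI reduces the hours, not the rate."""
    hours = TASK_HOURS[task] / ai_speedup
    return round(hours * hourly_rate, 2)

print(bill("discovery_review", 300))                # no AI: 20h at $300
print(bill("discovery_review", 300, ai_speedup=4))  # with AI: 5h at $300
```

The client's bill drops with the actual time spent, so the effective rate never becomes unreasonable even as the work gets faster.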
Source: https://www.wsj.com/tech/ai/ai-goodbye-to-billable-hours-cba198fe
Joe Rogan and AI
About 40 minutes into this interview, Jensen Huang talks about how AI has affected REAL JOBS. One of the foundational researchers of AI predicted that radiologists would lose their jobs since AI is better at reading X-rays. What actually happened is that radiology jobs increased over that time. AI was not replacing the radiologist; it was making an individual task more efficient and accurate.
I think we need to look at the law industry in the same light. AI isn't replacing the lawyer; it's not taking depositions, setting strategy, or wearing the million other hats lawyers wear. But it can be really good at speeding up discovery while reducing errors, just like it was for radiology.
Source: https://www.youtube.com/watch?v=3hptKYix4X8