
⛔️ Zuck changes Meta's open source stance, jobs to be most affected by AI revealed... and meet the man who turned down Meta's $1.5bn job offer

Zuckerberg signals Meta's heading for closed source, Google unveil Deep Think, and meet the man who turned down Meta's $1.5 billion job offer.

In partnership with

📰 Welcome back!

To be open source, or not to be open source: that is the question.

Well, Mark Zuckerberg seems to have made up his mind when it comes to Meta’s ‘superintelligence’ AI models. Just a year after saying that Meta’s most advanced Llama models would be free for all to go forth and propagate, he appears to have backtracked on that statement this week.

It also underlines how US companies keep drifting towards closed source in the AI space while China goes open source.

More on that below, so make sure you click the ‘Read Online’ tab to consume this delicious weekly newsletter in full.

🚀 What we’re covering today…

  • 📣 Zuckerberg lets slip Meta’s open source future

  • 💬 Google unveil Deep Think on the Gemini app

  • 🤖 ChatGPT passes the ‘I am not a robot’ test

  • 👨‍🔬 A list reveals which jobs will be most affected by AI

  • 🍏 Apple risk losing $12.5bn in revenue over one key decision

  • 🧺 Forget your jobs, robots can now do your laundry!

  • 🍆 Meta accused of stealing porn to train its AI

  • And meet the man who turned down Meta’s $1.5bn job offer

🔴 Quick Note: We like to cover loads of AI news in our newsletter, so for the best reading experience, we suggest opening this in your browser!

Head to the ‘READ ONLINE’ tab at the top of this email.

👁️ 👁️ What you might have missed

  • Sound the alarms! Google have unveiled Deep Think for all Ultra users on the Gemini app. Gemini 2.5 Deep Think is touted as Google’s most advanced AI reasoning model to date, chewing over your burning questions and generating several ideas alongside them before settling on an answer. Deep Think was first shown off at Google I/O 2025 in May this year and is now the company’s first publicly available multi-agent model. While most consumer-facing AI models use a single agent to answer your question, Deep Think spawns multiple agents to tackle what you’ve asked in parallel. That takes more computational resources, but should give you better results (there’s a toy sketch of the parallel-agent idea just after this list).

  • ChatGPT can now hilariously pass the ‘I am not a robot’ test online. While I’m not sure it can tackle those pesky CAPTCHAs just yet, one Reddit user has shown how the AI chatbot easily bypassed security checkpoints on the internet, including Cloudflare’s anti-bot verification. “I’ll click the ‘Verify you are human’ checkbox to complete the verification on Cloudflare,” ChatGPT wrote to the user. “This step is necessary to prove I’m not a bot and proceed with the action.” In response to the post, another Reddit user said they tried the same trick with a CAPTCHA on Discord, but their account is now banned forever for attempting it. So be warned.

  • Microsoft has released a report that lists the top 10 jobs most likely to be affected by AI. AI coming for human jobs has become a recurring segment around here, bordering on a running joke. The folks over at Microsoft analyzed an anonymized dataset of 200,000 conversations between U.S. users and its Copilot chatbot over nine months of last year to determine which jobs are most at risk. There are some obvious jobs on the list, like translators and historians, which AI seems to handle most successfully. But if you’re a radio DJ, then you might be fucked. Here’s the top 10 according to the report:

    1. Interpreters and translators
    2. Historians
    3. Passenger attendants
    4. Sales representatives
    5. Writers and authors (fuck)
    6. Customer service representatives
    7. CNC tool programmers
    8. Telephone operators
    9. Ticket agents and travel clerks
    10. Broadcast announcers and radio DJs

  • Staying on the topic of AI taking our jobs, one job it can now do is your laundry. That dream is becoming a reality after Brett Adcock shared footage on social media of his F.02 humanoid robot doing the washing using Helix. It takes its sweet arse time, mind you, so I wouldn’t rely on it if you’re in a hurry.

  • Apple could be about to lose up to $12.5 billion in revenue if a federal judge forces Google to change how it pays for default search placement. That’s according to JPMorgan, who claim the Department of Justice is demanding changes as part of Google’s antitrust case, which accused the company of being a monopolist in general search. While Apple doesn’t really have anything to do with the antitrust case itself, they could be set to lose a lot of money if this one key decision in early August doesn’t go their way.

  • Porn appears to be causing quite the commotion at the moment. With Brits being told to hand over their IDs to get on Pornhub, Meta are now being accused of stealing pornography to speed up training their AI. Yeah, you read that right. Strike 3 Holdings, a company that makes pornographic material, is suing Meta in the US, accusing the company of downloading and uploading their copyrighted material on fucking BitTorrent while pushing porn into users’ feeds. Lol. The file-sharing activity has supposedly been traced back to Meta’s own internet address and one of their employees. Meta have even been sensationally accused of trading these naughty videos to get hold of other training files, such as books and films, more quickly. It’s like when you used to trade shinies in the playground with your mates even though it was against school rules… only this time, Charizard has tits.

    Source: Grok… 10 pints…


  • Staying with Meta now, because they’re the only US tech giant that has refused to sign up to the European Union’s AI code so far. While Zuck is refusing to bow down, Google has gone the other way and confirmed it will sign the EU’s AI Code of Practice on General Purpose AI (GPAI), while still expressing concerns about the bloc’s AI rules.

  • OpenAI have swiftly deleted a ChatGPT feature that seemed pretty dumb to begin with. The AI giant has removed a feature that allowed conversations to be indexed by search engines after thousands of private chats ended up in Google search results. To share a chat, users could tick a box that said ‘make this chat discoverable’, with smaller print underneath explaining that this ‘allows it to be shown in web searches’. While not everyone has sordid chats with ChatGPT, or other chatbots for that matter, why OpenAI thought this short-lived experiment was a good idea is beyond me.

  • Amazon CEO Andy Jassy changed his tune on AI during the company’s earnings call last month, saying that artificial intelligence will make employees’ jobs “more enjoyable”. His claim is that work will be so much more fun because AI will free people up from the routine shit that weighs them down. But this is the same guy who faced backlash back in June for claiming that there would be 1.5 million fewer human jobs due to AI, right?

  • In previous editions of the Big Machines newsletter, we have reported that Apple could be in the market to purchase Perplexity or Mistral. Well, Tim Cook has now announced that Apple is open to making major AI acquisitions in the future. This departure from Apple’s traditionally cautious approach is partly the result of mounting pressure from Wall Street: Apple are lagging behind in the AI race and have even been nudged to start making purchases. Watch this space.

  • Just when you thought AI couldn’t come for nearly every job imaginable, it has. And one of those jobs you can now add to the list is modelling. The August issue of Vogue carries a two-page Guess spread showing a blonde model in two outfits, but at the bottom of the page is a small line of text that reads: “Produced by Seraphinne Vallora on AI.” The model doesn’t exist. It’s fair to say that this has not gone down well on social media, with many lashing out at Guess and Vogue for sidelining real models.

(Via TikTok: @lala4an, “idk what to say” #fyp #vogue)
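As promised in the Deep Think item above, here’s roughly what “multi-agent” means in practice. The sketch below is a toy Python illustration of the general idea, not Google’s actual implementation: the ask_model function is a hypothetical stand-in for whichever model API you use, and the final “judge” step that merges the parallel drafts is kept deliberately simple.

```python
import asyncio

# Hypothetical stand-in for a real model call (swap in your provider's SDK).
async def ask_model(prompt: str) -> str:
    await asyncio.sleep(0.1)  # pretend we're waiting on the network
    return f"Draft answer to: {prompt}"

async def deep_think(question: str, n_agents: int = 4) -> str:
    """Fan the same question out to several 'agents' in parallel,
    then make one final call to merge the drafts into an answer."""
    # 1. Spawn n_agents parallel attempts, each nudged to explore differently.
    drafts = await asyncio.gather(*[
        ask_model(f"Approach #{i + 1}: think step by step about: {question}")
        for i in range(n_agents)
    ])

    # 2. A final pass compares the parallel drafts and writes a single answer.
    judge_prompt = (
        "Here are several candidate answers:\n"
        + "\n---\n".join(drafts)
        + f"\n\nCombine them into the single best answer to: {question}"
    )
    return await ask_model(judge_prompt)

if __name__ == "__main__":
    print(asyncio.run(deep_think("Why is the sky blue?")))
```

More agents means more compute burned per question, which is why this sort of thing lives behind the pricey Ultra tier rather than the free app.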

Time to change compliance forever.

We’re thrilled to announce our $32M Series A at a $300M valuation, led by Insight Partners!

Delve is shaping the future of GRC with an AI-native approach that cuts busywork and saves teams hundreds of hours. Startups like Lovable, Bland, and Browser trust our AI to get compliant—fast.

To celebrate, we’re giving back with 3 limited-time offers:

  • $15,000 referral bonus if you refer a founding engineer we hire

  • $2,000 off compliance setup for new customers – claim here

  • A custom Delve doormat for anyone who reposts + comments on our LinkedIn post (while supplies last!)

Thank you for your support—this is just the beginning.

👉️ Get started with Delve

🧩 Other Bits

  • Google Earth AI has been released to the masses and it looks very impressive. Using their collection of geospatial models and datasets, Google are now hoping to help people, businesses and organizations tackle the planet’s most critical needs.

  • More information has been revealed by the Trump Administration on their AI rollout and how they aim to prevent that damned Guardian-reading, tofu-eating, utter woke nonsense type of AI from getting into the Federal Government. There’s a lot to unpack, so it’s best to read what’s being rolled out here.

  • Remember that cautionary tale from our last Big Machines newsletter about Replit’s vibe-coding agent deleting a SaaS founder’s entire production database, teaching us not to fully trust AI development tools? Well, it’s happened again. This time Google’s Gemini CLI is to blame, having deleted an entire codebase made by software developer Anurag Gupta. The poor bastard just wanted his files moved into another folder, but Gemini deleted everything and then apologised afterwards.

  • Just weeks on from what looked like a very messy breakup, it seems Microsoft and OpenAI want to keep things civil. That’s because Microsoft are reportedly in talks with OpenAI to gain ongoing access to the startup’s technology, even if it achieves what it defines as AGI.

  • Spotify CEO Daniel Ek has subtly outlined his ambitions for a more conversational AI interface that allows natural, dialogue-based music interactions. It builds on the AI DJ and acquisitions like Sonantic. While “Hey Spotify” never really took off when it was being experimented with, it seems the music streaming giant are ready to capitalise on the rapid progress of natural language processing models.

  • Amazon’s next-gen humanoids are almost here: Skild AI has unveiled their foundational AI model aimed at enabling robots to perform physical tasks just like a human, heading for an Amazon warehouse near you. Hopefully they don’t get the procrastination update that has plagued me for decades.

  • Google’s AI Mode is getting a new feature called ‘Canvas’, allowing users to build study plans and organise information over multiple sessions using a side panel. AI Mode will start bundling things together for you once the Canvas has been created, and you can then keep refining the output with follow-up prompts. Neat.

Source: The Internet…

Having been vocal for ages about keeping Meta’s AI models open source, it seems Mark Zuckerberg is now walking back that stance.

Earlier this week, the Meta CEO revealed his vision for “personal superintelligence”, an idea that would see people use AI to achieve their ultimate goals.

But buried in this statement was a subtle note suggesting Meta is pivoting away from open-sourcing its models and will keep the majority of them closed going forward.

“We believe the benefits of superintelligence should be shared with the world as broadly as possible,” wrote Zuckerberg. “That said, superintelligence will raise novel safety concerns. We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source.”

It’s that final sentence that spills the beans.

For a company so intent on being open source, and on sticking to that core part of its AI strategy, backtracking has obviously caused outrage.

But with Meta falling behind in the AI race, joining the likes of OpenAI and Google DeepMind in locking down their most advanced models makes sense.

It’s no secret that Meta wants to keep pace with OpenAI’s GPT-4, and they’ve already invested huge sums of money into Scale AI and created a new internal group called Meta Superintelligence Labs.

Plus, Meta seem to have been pivoting away from Llama, and reports have also suggested they’ve paused testing on their next model, Behemoth. Their focus now is developing a closed model they have full control over.

But with Meta now seemingly going closed source alongside the likes of OpenAI and Google DeepMind, the AI battle between China’s open-source approach and the USA’s proprietary one has just gone up another gear.

The US have arguably shot themselves in the foot over the past two years, with excessive regulation on everything from energy production to model development and semiconductor chip production slowing them down somewhat.

And while Donald Dump is trying his best to limit China’s efforts with export controls and other restrictions, Beijing are plowing money into their own AI so they don’t become as reliant on western technology.

That self-sufficient push could see China become less vulnerable to US pressure if they do succeed in building their own domestic artificial intelligence ecosystem.

This also comes after China’s Premier Li Qiang addressed the annual World Artificial Intelligence Conference in Shanghai last week, calling for “global cooperation”. 

But the change in stance on open source could still see the likes of Meta close the gap on their rivals, given they are afforded more control over monetizing their products.

The restriction will frustrate many, but that private island Zuck is building for himself and his loved ones won’t build itself, you know.

There is a much wider and weirder story at play here. Why is China, which is notoriously IP-pinchy, leading the way on open-source AI? And why are US companies knocking it on the head in favour of closed source?

I’m not smart enough to know what six-dimensional chess is being played here by both parties… unless there’s something even more concerning going on: despite all the moves being made, neither side really knows what it’s doing.

If you have any idea, please leave us a comment.

📋 LLM Leaderboard

📲 Trending tools & apps

🫵 Our Picks

  1. Showrunner – Your Own Personal Netflix (Kinda)
    What it is: Watch AI-generated shows and then make your own episodes starring... you. Built by Fable, backed by Amazon.
    How to use it: Browse shows on showrunner.xyz, edit the plot, upload a selfie, and generate your custom episode in minutes. It’s binge-watching meets content creation.

  2. Google NotebookLM’s Video Overviews
    What it is: Turns your notes, PDFs, and research into slick explainer videos. Think CliffsNotes, but with visuals.
    How to use it: Upload content to NotebookLM, hit “Video Overviews,” choose your focus (e.g., “highlight key findings”), and it spits out a short video you can actually share or present.

  3. Alibaba’s Wan2.2 Video Model
    What it is: Open-source video generation model with cinematic quality. Like Sora, but it’s on GitHub and ready to use.
    How to use it: Download the model (T2V, I2V, or hybrid), write a detailed prompt (“drone shot of sci-fi city at dusk”), and create multi-scene video clips—all with smooth motion and realistic lighting.

🤓 Educational Picks

  1. Claude Can Build Full n8n Automations
    What it is: Claude 3.5 Sonnet now generates full n8n workflows from scratch—nodes, logic, and all.
    How to use it: Tell Claude what you want automated (e.g., “save tweets to Notion, alert me on Slack”), and it gives you import-ready JSON. Paste into n8n and deploy (there’s a rough sketch of this just after these Educational Picks).

  2. Google’s Opal – Visual AI App Builder
    What it is: Google’s new no-code, drag-and-drop AI app builder. Like n8n, but built for the visually inclined.
    How to use it: Describe the task, Opal builds the workflow. Customize with visual blocks, connect tools, and run it—no code required.

  3. Anthropic’s AI Fluency Course
    What it is: A free 12‑lesson course from Anthropic on using AI systems effectively, ethically, and safely—no jargon, no fluff.
    How to use it: Enroll online, finish in 3–4 hours. Learn best practices for prompting, evaluating outputs, and collaborating with AI—perfect for beginners and upskillers.
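Since Educational Pick #1 will tempt a few of you, here’s a minimal sketch of the Claude-to-n8n trick using Anthropic’s Python SDK. The model alias, prompt wording and output filename are our own assumptions rather than anything official, and you should eyeball the generated JSON before importing it into n8n rather than trusting it blindly.

```python
import anthropic  # pip install anthropic; expects ANTHROPIC_API_KEY in your environment

client = anthropic.Anthropic()

# Describe the automation in plain English and ask for import-ready n8n JSON.
task = "When a new row lands in a Google Sheet, summarise it and post it to Slack."

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: use whichever Claude model you have access to
    max_tokens=4000,
    messages=[{
        "role": "user",
        "content": (
            "Generate a complete n8n workflow as raw JSON (nodes, connections, "
            "parameters), with no commentary, for this automation:\n" + task
        ),
    }],
)

# Save the reply so it can be imported via n8n's workflow import option.
with open("workflow.json", "w") as f:
    f.write(message.content[0].text)

print("Wrote workflow.json. Review it, then import it into n8n.")
```

In practice the JSON usually needs credentials wiring up by hand inside n8n, so treat it as a head start rather than a finished automation.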

🚀 Trending Apps & Models

  1. AI Jingle Maker
    What it is: Generate catchy jingles with AI in seconds—no musical talent required. Perfect for ads, podcasts, or ironic meme content.
    How to use it: Type your product name, pick a style (“80s synth,” “country,” “trap”), and AI delivers a fully-produced jingle. Download and profit (or at least go viral).

  2. Standout
    What it is: AI-generated job applications tailored to any role—resumes, cover letters, and portfolio pages done for you.
    How to use it: Upload your CV or LinkedIn profile, select a job description, and Standout writes a personalized application. Apply faster, stand out more.

  3. Cipher – Open Source Data App Builder
    What it is: An open-source platform to turn your databases or spreadsheets into sleek AI-powered web apps—no backend needed.
    How to use it: Connect your data source, customize your app visually, and deploy. Ideal for dashboards, internal tools, or launching MVPs on a budget.

  4. Watchman
    What it is: AI agent that monitors your deployed AI apps, flags hallucinations, bias, and outages—before users do.
    How to use it: Integrate with your LLM apps or workflows, set thresholds, and Watchman alerts you to anything weird. Saves face, saves time.

💸 Financials

  • Microsoft closed out the 2025 fiscal year with an absolute bang, officially hitting a $4 trillion market cap earlier this week. With demand for cloud and AI services surging, Microsoft were able to capitalise, which helped send their stock to new heights in after-hours trading. The company reported revenue of $76.4 billion for the quarter ending June 30, 2025, an 18% jump over the previous year.

  • Microsoft weren’t the only big hitters to deliver a standout quarterly performance this week. Apple also posted record Q3 revenue of $94.04 billion, with earnings per share of $1.57. This smashed Wall Street expectations of $89.53bn in revenue and $1.43 EPS.

  • Figma went public this week, raising $1.2 billion in an initial public offering that values the firm at $19.3bn. It’s one of the most highly anticipated technology listings of the year, with the design software company pricing its shares at $33 each, exceeding its previously announced range of $30-$32 a share.

  • Stockholm-based Tzafon have secured €8.3 million in a pre-seed round. The funding will help boost their AI capabilities and launch “Lightcone”, an autonomous agent that can operate across any app or platform. The round was led by HV Capital, with backing from Streamlined VC, Kakao VC, Oliver Jung, and angels from OpenAI and xAI.

  • Palantir is set to report its second-quarter earnings on Monday, August 4, with analysts confident in the company’s AI-driven growth trajectory. Wall Street currently forecasts earnings of $0.14 per share for the second quarter, a 55.6% year-over-year increase, with revenue projected at $939.47m.

  • While Palantir is set for a big day on Monday, the company has also been awarded a whopping $10 billion enterprise deal by the US Army. It’s the largest technology procurement overhaul in recent memory, with the 10-year deal consolidating 75 separate contracts into a single framework.

  • An unprecedented export licensing backlog, caused by US government turmoil, has stalled thousands of applications, including Nvidia’s H20 AI chip sales to China worth billions of dollars. While there were assurances that the Orange Man in the White House would approve AI chip exports to China, no licenses have yet been issued. This has left the world’s most valuable company in limbo, along with many others.

🕵️‍♂️ FREE ENTRY TO OUR INVITE-ONLY AI CHAT ON TELEGRAM…

If you share this newsletter with a friend and they actively sign up for the Big Machines newsletter, we will send you access to our invite-only Big Machines Telegram group, which is full of builders, investors, founders, and creators.

Access is now only granted to those who refer our newsletter to active subscribers, which means if you sign up with your work email, we will know, you sneaky bastards.

This would kill our open rate, so please don't do that, we beg.

👋 Until next week

Reckon you could ever turn down a job worth $1.5 billion? If you answered yes, you’re a filthy liar. However, if your name is Andrew Tulloch and you work for Thinking Machines Lab, then fair fucking play.

Meet the man who actually turned down a job offer from Mark Zuckerberg that was worth an eye-watering $1.5 billion (yes, billion) over a six-year period.

But how did this barely believable story happen? Well, according to the Wall Street Journal, Zuck was looking to play catch-up in the generative AI race and reached out to OpenAI’s former chief technology officer, Mira Murati. He even offered to buy her startup, Thinking Machines Lab, but she swiftly rejected the offer.

This saw Zuck launch a full-scale raid, and his main target was Mr Andrew Tulloch. Zuck dangled a mammoth $1bn package in front of him, which would rise to $1.5bn with bonuses and stock performance added on top, but Tulloch said no.

Is that the biggest aura play of all time or the dumbest job offer rejection you’ve ever seen?

Rejecting Zuckerberg's $1billion job offer

Login or Subscribe to participate in polls.

Anyway, that’s all for this week’s newsletter. Hope you all have a top week and be sure to tune in again same time next Monday.

Sam, Grant, Mike and The Big Machines team.

Follow us on Twitter.

✍️ How are we doing?

We need your feedback to improve the information we give to you

Login or Subscribe to participate in polls.
