Hi Folks,
I had an issue with Zoom and had to change the meeting details. Sorry about that! The new link is here. Hope you can join us at 1 pm ET today (Sunday, August 3)!
What follows is an experimental prototype. I am looking for thoughts and comments, as well as for people who see what’s at stake and want to get involved. If that is you, please email us at coalitionforsanity@proton.me.
We propose that we need a citizens-led initiative — a nonpartisan social movement — to address the immediate dangers of AI. These dangers include the existential threat of artificial superintelligence, the social impact of mass unemployment caused by AI, the mental health consequences of deepfakes (which also threaten social coherence and democratic decision-making), and the ecological impact of massive resource use and energy consumption by data centers.
This is not a Red or Blue, Right or Left issue: potentially all of us will be negatively impacted, or even exterminated, if the AI experiment goes wrong and we end up with a rogue superintelligence pursuing its own goals. On the other hand, if we can master the power of AI, we could have a future without “bullshit jobs,” where everybody is liberated to cultivate their unique gifts and where we join forces, through restorative and regenerative practices, to address the ecological catastrophe unleashed by modern industrial society.
We’re holding a free, open-to-all Zoom call on Sunday at 1 pm ET to discuss these threats and the steps toward building a movement to confront the negative impacts of AI development. We’re all in this together!
Please join us:

Introduction: Toward a New Social Contract
A Double-Edged Sword: AI brings extraordinary opportunities for some, and massive risks for all of us. AI development is driven by big tech and governments in a competitive “arms race,” often leaving out the public’s voice. Society has faced disruptive technologies before — and each time, we needed a new social contract to ensure the benefits were fairly distributed. We’re at that juncture with AI.
A People-Centered Approach: Civil society needs to confront AI risk, champion a social dividend, and assert democratic oversight of this powerful and potentially insanely destructive technology, which is being developed with minimal oversight or caution and is already having severely negative effects. Below, we’ve defined five demands as the framework for building a movement — this is just a prototype, a work in progress. We need common-sense safeguards, a pause on the AGI race, and a collective conversation on how AI supports the future of our human community. Above all, we need you to get involved!
1. AI Dividend or Universal Basic Income
The Challenge: AI and automation are already eliminating hundreds of thousands of jobs in many sectors. AI could displace, downsize, or eliminate tens or even hundreds of millions of jobs in the next few years. Whether it’s self-driving trucks or AI customer service, many roles are at risk. Even when jobs aren’t fully eliminated, AI drives down wages by pushing workers into lower-paying gigs. This fuels inequality and insecurity.
The Proposal: Establish a Universal Basic Income (UBI) funded by the gains of AI productivity — essentially a “social dividend.” As AI boosts economic output, some of that windfall should guarantee a baseline income for all. This isn’t a new idea: policymakers and experts worldwide have discussed UBI as a “new social contract” for the AI era. By providing everyone a financial floor, UBI would:
Protect Workers: If AI-driven automation cuts jobs or hours, people won’t fall into destitution. They’ll have breathing room to retrain, adapt, or pursue other work. Notably, real-world trials suggest basic income doesn’t make people stop working; in test cases, it helped people find better jobs or start businesses, because they had less anxiety and insecurity.
Reduce Inequality: AI is creating enormous wealth — but currently it’s concentrated in a few hands (big tech companies, top executives). A UBI would redistribute some of that wealth, addressing what one economist calls the failure to spread AI’s benefits fairly. It ensures everyone gets a piece of the AI pie, not just investors and CEOs.
Reward Our Contributions: AI systems like ChatGPT are trained on vast amounts of data — text, art, knowledge — generated by ordinary people. As basic income advocate Scott Santens asks, “Why should only one or two companies get rich off … human work that we all created?” A UBI can be seen as a dividend for humanity’s trove of data and our past labor that feed AI.
Evidence: A 2023 Guardian report notes that UBI can address the AI–automation threat to workers. According to an analysis from the London School of Economics, UBI has the potential to tackle “widespread job losses” from AI. Pilot projects (such as trials in Kenya and the US) showed improved well-being and economic activity with basic income. The evidence suggests that UBI can work in practice.
Anticipating Concerns: Two questions come up immediately: How would we fund it? And would it disincentivize work? Funding options include a tax on AI transactions or data usage, wealth taxes, and carbon taxes; and most pilots show little drop in work effort, especially when the UBI is modest. UBI isn’t about getting paid to do nothing; it’s about ensuring freedom — freedom to pursue education, care for family, start a business, or simply not fear hunger if automation changes the job market.
2. Citizens’ Councils for AI Oversight
The Challenge: Decisions about AI — what gets developed, how it’s used, who regulates it — are currently made by a handful of powerful actors. These include tech CEOs, elite engineers, and government officials, often behind closed doors. The public and frontline workers have little say, yet we deal with the consequences (job losses, biased AI, surveillance, etc.). This has led to a democratic deficit in tech governance. Just as importantly, it means valuable public perspectives are being missed. Ordinary people may raise real concerns that tech insiders overlook (e.g. privacy, local job impacts, cultural values).
The Proposal: Create Citizen Councils or Assemblies on AI at various levels (local community, national, even global). These would be panels of diverse citizens — like juries — who study an AI issue with expert input and then deliberate to provide guidance or oversight. Their role could include:
Advising governments on AI regulations and ethical guidelines.
Evaluating local impacts (for instance, if a city considers automating public transit, a citizen panel could assess community effects on employment and accessibility).
Overseeing distribution of the “AI dividend” — ensuring funds (from an AI tax or dividend program) are used in ways that benefit communities.
Organizing public forums on emerging issues (like facial recognition use by police, or AI in schools), to inject democratic debate before policies are set.
Why It Helps: Citizen assemblies have a track record in tackling complex topics. For example, over 200 climate-focused citizens’ assemblies worldwide have helped drive bolder climate actions by integrating public preferences into policy. In Ireland, a citizens’ assembly famously broke political deadlock on legalizing abortion by finding common ground solutions. Likewise, on AI: involving everyday people can restore trust — citizens will know their concerns are heard — and improve decisions with on-the-ground perspectives. It counters the notion that AI policy is for “experts only.” In fact, even tech leaders have started to embrace this idea: OpenAI’s CEO Sam Altman suggested citizen input could help solve regulatory challenges. We agree: no one is better suited to say what society’s values are than society itself.
Evidence & Examples: Some countries and organizations are already exploring this. For instance, the EU funded a pilot “Citizens’ Assembly on AI” in 2023 to gather public input for EU AI policy. The UN has also entertained the concept of a global citizens’ assembly on AI, recognizing that international AI governance shouldn’t be left only to superpowers. These efforts show a growing recognition that AI has a democracy problem, and citizens’ deliberation can help fix it.
Addressing Skeptics: Some may question whether laypeople can understand technical AI issues. The experience from other assemblies is that with clear explanations and expert briefings, citizens absolutely can grasp the essentials and offer meaningful input. They often ask practical questions experts miss. We don’t expect citizens to decide how to code an algorithm; rather, they weigh in on values and trade-offs (e.g. privacy vs security, innovation vs risk). This is a fundamentally democratic task. Our movement insists that AI’s trajectory shouldn’t be left to a tech elite; it must include the voices of those who will live with its outcomes — which is all of us!
3. Pause the Race to Superintelligence
The Challenge: In the tech industry, there’s a fierce “AI arms race” where companies rush to build ever more powerful AI models. Increasingly, AI companies are meshed with the military-industrial complex. The worry is that this race prioritizes speed over safety. Engineers themselves admit they don’t fully understand today’s most advanced AIs. Rushing to unleash artificial general intelligence (AGI) — AI that can out-think humans in virtually every domain — without safety guarantees is courting disaster, according to many experts. The worst-case scenario? An uncontrolled superintelligent AI that no one can shut off or direct, with goals misaligned to human well-being. It sounds like sci-fi, but leading AI scientists consider it a real possibility — one that could even lead to human extinction if mishandled, as Time Magazine reports.
The Warnings: This isn’t just fearmongering by outsiders. Insiders are raising red flags. For instance:
An extensive 2022 expert survey found that AI researchers, on average, gave a 1-in-7 chance (14%) that superintelligent AI could lead to human extinction or a similarly grave catastrophe (and some say it is highly probable or even inevitable). Even a former OpenAI researcher has publicly likened deploying powerful AI to “experimenting with a plane that has a 14% crash chance” — an unacceptable risk by any normal standard.
The Future of Life Institute’s open letter (March 2023), co-signed by over 30,000 people (including Elon Musk and Apple co-founder Steve Wozniak), urged a six-month pause on training AI systems more powerful than GPT-4. They argued this breathing space is necessary to devise safety measures and governance.
Geoffrey Hinton, dubbed the “Godfather of AI,” quit Google in 2023 to warn the world that AI’s rapid advancement is “an existential risk” and we may “lose control over AI” if we aren’t careful.
Sam Altman, who leads the very lab creating cutting-edge AI, openly stated that superhuman AI is “the greatest threat to the continued existence of humanity.”
Long-time AI safety researchers like Eliezer Yudkowsky go further, saying “if we go ahead on this [path], everyone will die” — meaning he expects an unaligned super-AI to wipe out humanity as an “obvious” outcome if no robust safety measures are in place. At this point, we have no idea how to solve the “alignment problem” for a synthetic supermind that can manipulate and out-think us.
The Proposal: Implement a precautionary pause or moratorium on the development of any AI that approaches human-level general intelligence or beyond until we have robust, validated safety measures and governance. In practice, this means calling on governments to slow the deployment of the most powerful AI models. We must halt training of models more advanced than today’s cutting-edge for a set period, establish international agreements or regulations on AI development, and require thorough pre-release testing and auditing of any AI that claims to be AGI or could rapidly self-improve. Humanity must get ahead of the problem instead of racing blindly toward potential catastrophe.
4. Mental Health, Social Coherence, and Deepfakes
Misinformation & Cognitive Collapse: AI is not only a future risk; it’s already here in our information ecosystem. “Deepfakes” — hyper-realistic fake videos or audio — and AI-generated misinformation are proliferating. This has two big impacts: political/social and psychological.
On the social front, deepfakes are used to undermine democracy and trust. They depict public figures saying or doing things they never did, or create completely fictitious events that some viewers believe. Research has shown that deepfakes can change people’s emotions and attitudes and are a potent new tool of propaganda. Worse, the existence of deepfakes creates a “liar’s dividend” — bad actors can deny real evidence by claiming “that video is probably a deepfake,” eroding the very idea of proof. This could enable corruption and crime by providing plausible deniability for anything caught on camera.
On the individual level, the spread of AI-generated lies and impersonations is causing mental health harms. Imagine seeing a pornographic video with your face pasted on it circulating online — this is a reality many (mostly women) have faced due to deepfake porn. Victims report severe anxiety, stress, and trauma from such violations. Even non-targeted misinformation has a cost: living in a “post-truth” society can create chronic stress and cynicism, as people feel they cannot trust what they see (a deeply unsettling state). Scams using AI-generated content are becoming commonplace.
The Proposal: Launch a citizen-led inquiry and action plan on AI’s social and mental health impacts. This would involve:
Research and Testimony: Bringing together mental health professionals, sociologists, educators, and affected individuals to document how AI-generated false content is impacting people and communities. Much as in public health inquiries, hearing real stories (e.g. a journalist whose reputation was nearly destroyed by a fake video, or a teenager bullied with a deepfake) can drive home the urgency.
Education Campaigns and “Pre-Bunking”: We must inoculate society against misinformation. This means public education on how to spot AI fakes, and digital literacy programs in schools that teach students to critically evaluate media. The more people know about the existence of deepfakes, the less likely they are to be fooled — but currently awareness is low and overconfidence is high (studies show people think they can spot deepfakes but in reality cannot). We can push for warning labels on AI-generated media, watermarks, and funding for nonprofits that do fact-checking and deepfake detection.
Mental Health Support: Recognize that the AI era brings new forms of psychological harm. Allocate funding for counseling for victims of deepfake abuse and scams. Train law enforcement to handle these cases sensitively. Push tech platforms to respond quickly to reports of AI-generated defamation or harassment, and provide remedies (like fast takedowns) — similar to how we handle revenge porn.
Legislation and Accountability: We need laws to penalize malicious use of deepfakes — for instance, making it illegal to create or share someone’s likeness in explicit content without consent, and enforcing stiff penalties for using AI impersonation to commit fraud. Some jurisdictions have begun this (e.g. California criminalized certain political deepfakes near elections), but we need a broader framework. Also, call for transparency from AI developers: models capable of generating faces or voices should have safeguards (like requiring watermarks or log files to trace fake content). Ultimately, the onus should be on AI companies to ensure their tools are not wrecking mental health or social trust. If they won’t act, regulation must step in.
Why This Matters: Society requires a baseline of shared reality and trust. If anything can be fake, and people start dismissing real events as “fake news,” we enter a dangerous spiral of cynicism and confusion. Authoritarian forces thrive in such an environment, as do scammers. By tackling this head-on, we defend the integrity of our information space and protect the psychological well-being of citizens. We must preserve the fabric of truth that democracy and community life depend on, and protect individuals from new forms of harassment and deceit. Our movement frames this as a public safety issue: just as we demand action when a toxin spreads in our water, we must demand action when toxic misinformation and deepfakes spread in our media.
5. Sustainable Progress: AI and the Environment — A Necessary Pause
The Challenge: Hidden behind the shiny apps and “cloud AI” services is a very real, physical footprint. AI development is consuming vast amounts of energy and resources, raising the question of environmental sustainability. Key issues include:
Carbon Emissions: Training large AI models requires huge computational power. This means drawing electricity from the grid — often generated by fossil fuels. A single training run of an advanced model (like OpenAI’s GPT-3) emitted an estimated 552 tons of CO₂ — equivalent to driving a gas car 1.2 million miles! (A back-of-envelope check of that equivalence follows this list.) And AI companies run these processes repeatedly to tune and upgrade models. One report notes AI could become a significant contributor to global carbon emissions if unchecked. While AI can also help optimize energy use in other domains, its own carbon footprint is a growing concern.
Energy Strain: Beyond emissions, the sheer electricity demand is straining power grids. In places like Ireland, data centers (many housing AI workloads) might draw over a quarter of all electricity in coming years. That can lead to higher energy costs for everyone and even blackouts or the need for new power plants. Rapid AI expansion could conflict with climate goals if it necessitates more fossil power to keep up.
Water Usage: Big data centers guzzle water for cooling — a single AI supercomputing cluster can use millions of gallons. It’s estimated that AI data centers worldwide could soon use six times more water each year than a country the size of Denmark. This is alarming at a time of growing water scarcity (recall that a quarter of humanity already faces clean water shortages). Local communities near data centers might find their rivers running low as water prices rise.
E-Waste and Materials: AI hardware doesn’t last forever. Racks of servers get replaced every few years, contributing to electronic waste (full of hazardous substances like lead and mercury). Moreover, manufacturing the chips for AI is resource-intensive — a 2 kg chip requires 800 kg of raw materials to produce, including rare earth metals often mined in environmentally damaging ways. This means AI’s supply chain has a footprint from mining to manufacturing to disposal.
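As a rough sanity check on the mileage equivalence above, here is a back-of-envelope sketch. It assumes a gasoline car emits roughly 400 g of CO₂ per mile (a commonly cited EPA ballpark; the exact per-mile figure is our assumption and varies by vehicle, so treat this as an illustration, not a definitive calculation):

```python
# Back-of-envelope check: how many miles of gas-car driving emit as much
# CO2 as the estimated 552 metric tons from one GPT-3-scale training run?
# The ~400 g CO2/mile value is an assumed average; actual emissions vary
# by vehicle, fuel, and driving conditions.

training_emissions_tons = 552          # reported estimate, metric tons of CO2
grams_per_metric_ton = 1_000_000
car_grams_co2_per_mile = 400           # assumed gasoline-car average

equivalent_miles = (training_emissions_tons * grams_per_metric_ton) / car_grams_co2_per_mile
print(f"~{equivalent_miles:,.0f} miles")  # ~1,380,000 miles
```

A slightly less efficient car (around 460 g per mile) reproduces the 1.2 million miles quoted above; either way, the order of magnitude holds.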
The Proposal: Initiate a citizen-led ecological review of AI’s rollout and consider a deliberate slowdown on the most resource-intensive AI projects until they can be made sustainable. Concretely, our movement calls for:
Transparency in AI Energy Use: Companies must openly report the energy and water used by training and running their AI models. What isn’t measured can’t be managed. An “AI Environmental Impact Assessment” should be as routine as an economic impact report.
Greener AI Commitments: Pressure AI labs to invest in offsets and efficiency. For example, if a company trains a new model, it should pair that training with funding for renewable energy equivalent to the power consumed, or design the model to be more computationally efficient. Researchers are exploring techniques to cut down compute requirements — these should be prioritized (perhaps even mandated for models over a certain size).
Moratorium on Unsustainable Projects: If an AI application could be extremely costly for people and planet, we need a public discussion on it. For instance, do we need AI models that generate ultra-HD video in real-time if it means doubling data center emissions? We propose a temporary pause on AI expansions that significantly worsen emissions until we have a clear plan to mitigate that impact. This might mean pausing many AI features that are compute-hungry, until renewable energy catches up.
Align AI with Climate Goals: The world has agreed (in the Paris Agreement) to try to limit global warming. If AI is accelerating climate change or resource depletion, that undermines all of humanity’s other efforts. Our movement suggests convening experts in climate science, sustainability, and tech to set guidelines for “Green AI.” For example, perhaps any new AI model above a certain size must run on at least 75% renewable power, or we redirect some AI profits to climate adaptation funds as a form of compensation.
Public Dialogue on Priorities: Finally, we want society — not just corporations — to debate where AI resources should be focused. There is an opportunity cost to all those PhDs and billions of dollars chasing bigger AI models. Could some of that talent and money be better used on, say, AI for climate solutions, or even non-AI projects like renewable energy and public health? Pushing AI “faster than other societal initiatives” is a choice — one that benefits Big Tech economically — but is it the right choice for humanity right now?
The Big Picture: Innovation must be sustainable. Otherwise, we solve one problem and create two more. By addressing AI’s ecological footprint now, we ensure the tech of the future doesn’t undermine our future on this planet. Our movement aligns with environmental justice: the communities most affected by climate change (often poorer or marginalized) should not be further burdened by an AI industry gobbling up resources. We call for responsible innovation — progress that respects planetary limits, which we have already passed in many respects.
Conclusion: A Vision for a Human-Centered AI Future
We envision a future where AI is applied for the benefit of humanity and the Earth as a whole. Its profits fund everyone’s well-being, its deployment is guided by citizens’ values, it operates under strict safety standards, it enriches our information ecosystem rather than polluting it, and it runs on clean energy. In short, AI becomes a partner in a flourishing human society — not the greatest danger to our future.
If you are reading this, we encourage you to get involved in this nonpartisan movement. We are seeking to create a model for the formation of local groups that can quickly educate their communities. Then we will take action together. Help us build a broad, nonpartisan coalition of citizens seeking a flourishing world.
Link to the recording?
Hi Daniel,
A few thoughts ... as you say, this isn't left / right. Technocracy proceeds through whatever "team" is technically in power. To have any chance of preventing the worst of AI / technocracy, respectful alliances need to be made with those who have warned that COVID was a test run to see how much they could get away with in controlling people.
Governments across the West used the same language (and similar data manipulation) at the same time. That doesn't mean there was not also a real disease capable of harm - there was - which just makes it more cruel that early treatment was lied about all along. (https://anotherbetrayedliberal.substack.com/p/information-permaculture-and-cooties - see the section on "Treatable Since Day One".)
Some of the solutions you propose are also the same levers being used to build the digital prisons -
UBI - In a world not run by psychopaths, where the last five years were not as they were, I would fully support UBI. The rationale makes sense. But how could UBI not end up a tool of technocracy? It's what was envisioned, and spoken of, all along as one way to keep people compliant.
What just happened was a dystopian nightmare where some people had to "choose" between losing their jobs, school, and in some cases medical care or a chance at organ transplants, and taking an injection that was known all along to be extremely harmful, capable of transporting instructions to make a dangerous protein all over the body with no off switch. ( https://viralimmunologist.substack.com/ )
This harmed lower-income people the most, those who had no cushion and could not walk away from their jobs. Everything Democrats say about protecting the vulnerable rings worse than hollow until this is sincerely and deeply reckoned with.
Many who have been warning about the dangers of AI and technocracy see that part of the endgame is connecting participation in society to compliance with whatever medical products they want to put inside us. Five years ago I would have called this a "conspiracy theory". But then it happened.
Environment / sustainability - Yes, the footprint of AI - water, electricity, and more - is massive, and this needs more attention. But we should also stay vigilant about how the good-sounding mission of protecting the planet can serve as a trojan horse for digital prisons.
Each "team", red or blue, responds to different messaging that lets the trojan horse of technocracy in. Some who opposed the insanity of electronic passports for an injection never tested to stop transmission, and the debanking of peaceful protestors and doctors who were right all along, would accept technocracy when it's sold with the language of border security, reducing crime, and ensuring that only citizens vote in elections.
Others, who reject technocracy when it comes through concerns generally associated with the right wing, would embrace - indeed already have embraced - technocracy when it's sold with the language of "public health" and "environmental protection / sustainability".
Carbon / energy credits are one way for the digital noose to lock in, under the guise of what may seem reasonable at first (other than the fact that the predator class flies private jets to climate conferences).
https://www.travelandtourworld.com/news/article/uk-considers-carbon-passports-to-restrict-travel-and-combat-climate-change/
Also, as with "public health", computer models often "predict" dire outcomes from climate change, yet what "experts say" is taken at face value; and if anyone wants to trace any info to its roots - like the construction of climate studies, or the multiple factors that could be influencing extreme weather - they're called a "denialist" and smeared the same way the COVID doctors were. I know for certain that with COVID, most of those smeared as "misinformation spreaders" were right all along. If people are skeptical of the climate narrative, it doesn't mean they disregard the health of the planet, the air, or the water - or that they're certain human activity has no effect on the climate.
https://naturalselections.substack.com/p/bad-storms-bad-science
But concerns for climate have eclipsed concerns for water, air, honest science, and the sacred sovereignty of individual decisions. (It's one thing to choose to limit air travel, but it's a slippery slope to losing healthcare, or UBI, or the ability to be in society, for not eating insect-based protein.)
Somehow, the insane environmental footprint of AI needs to be understood - using AI to limit the carbon footprint of humans and pets, while the AI itself guzzles huge amounts of resources, is obviously hypocritical - while also not letting concern for climate / environment / sustainability be manipulated into accepting digital prisons.
Allies - There are lots of good people and organizations doing everything they can to raise awareness of and stop the coming technocratic takeover. What they have in common is an awareness that the COVID response was not just a bit much but amounted to global crimes against humanity - and was an illustration in real time of why the-powers-that-shouldn't-be can't be trusted.
To many people, there are few things worse to imagine than a system where societal participation hinges on accepting whatever the authorities deem "vaccines", or future mRNA products.
From the brilliant Joshua Stylman (co-founder of the awesome Threes Brewing in Brooklyn) -
" Consider the pipeline already emerging: wearable detects irregularity → automated medication reminder → insurance adjusts your premiums → employer questions your productivity → economic survival depends on biometric obedience. Your device doesn't just monitor; it becomes the authority on what your body needs, what treatments you require, and whether you're a financial risk." https://stylman.substack.com/p/maha-wearables-and-the-war-for-embodied
https://stylman.substack.com/p/the-great-surrender - a document from 2065 ... this is the trajectory ...
From "Escape Key", who has done deep, solid research on the global control structure / roots of technocracy - https://escapekey.substack.com/p/there-is-no-outside -
"Love as Leverage
The system's genius lies in transforming our highest virtues into control mechanisms. Our love for our children becomes support for surveillance systems that ‘protect’ them from future pandemics. Our compassion for the vulnerable becomes acceptance of restrictions that ‘save lives’. Our environmental concern becomes compliance with monitoring systems that ‘preserve the planet’.
They don't need to convince us to choose servitude over freedom. They only need to convince us to choose safety over risk, collective good over individual rights, expert wisdom over personal judgment. And they frame these choices as moral imperatives rather than political decisions. ..
The Enforcement Mechanism
Traditional authoritarian systems required extensive police apparatus and overt violence. This new approach is far more subtle and potentially more effective. Instead of sending armed agents to enforce compliance, it simply withdraws access to financial services and digital infrastructure that modern life depends upon.
Your bank account, employment, transportation, healthcare, your children's education — all increasingly depend on digital systems that can be programmed to recognise only ‘compliant’ individuals. ‘Immoral’ non-compliance doesn't result in arrest — it results in algorithmic exclusion from basic social participation. . .
Each step appears reasonable in isolation, and the policies being implemented to address these challenges are often sensible responses to what appears to be genuine problems. The danger lies not in individual policies but in their cumulative effect: systematic power transfer from democratic institutions to algorithmic systems, from local communities to international organisations, from human judgment to computational models.
We're witnessing the gradual construction of a comprehensive management system for human populations, implemented through legitimate institutions, and often supported by well-meaning people who cannot see the larger pattern. . .
The choice isn't between environmental protection and environmental destruction, or between public health and personal freedom. It's between human agency and algorithmic management, between democratic accountability and expert authority, between local adaptation and global optimisation.
We can coordinate globally while maintaining local sovereignty. We can use technology to enhance human capability rather than replace human judgment. But only if we recognise what's being constructed around us and choose deliberately to build something different."
--- The Last American Vagabond / Derrick Broze / Catherine Austin-Fitts have been warning about this for a long time ... These folks would be natural allies ~
https://tlavagabond.substack.com/p/the-palantir-panopticon-and-trumps
(The wise people fighting technocracy see and acknowledge that the digital prison is quickly being built under Trump, but also have no illusion that the other "team" was any less determined to build a control grid.)
https://tlavagabond.substack.com/ - these guys are awesome. Solid, pragmatic, rooted, and were right to not trust either team.
https://derrickbroze.substack.com/ - beautiful soul, aligned with community projects, tending the earth, healing through music, never trusted Trump nor the Dems.
https://patrickwood.substack.com/ - educating about technocracy for decades
https://doortofreedom.org/ - amazing people, good souls, rooted info, proving once again that Wikipedia is completely wrong
Unlike anyone who believed the COVID narrative (no early treatment; the vaccine is our best hope), these folks have good track records.
Hope this is helpful,
thanks for reading,
Ellen