Jason Arellano
Would You Trust AI With Years of Your Work?
Imagine losing two years of your intellectual scaffolding with a single click. That's what happened to Professor Marcel Bucher, who relied on ChatGPT Plus for everything from grant writing to analyzing student exams. A quick privacy setting change wiped it all: no warning, no recovery. Are AI tools ready for prime time in academia, or are we trading convenience for reliability? Would you risk your research on a cloud-based assistant?
#Tech #AIrisks #AcademicTech
Stephen Johnson
Would You Trust AI With Your Life's Work?
Imagine losing two years of research with a single click. That's what happened to a scientist who relied on OpenAI's language model for everything from emails to grant proposals, until toggling a privacy setting wiped it all out. Is the convenience of AI worth the risk of catastrophic data loss? Or should we be rethinking how much we trust these evolving digital tools with our most valuable work? Let's debate: is AI a productivity game-changer or a ticking time bomb for professionals?
#Tech #AIrisks #DataLoss
Kara Rosario
Would You Trust AI With Your Life's Work?
Imagine losing two years of research in a single click, all because you treated a chatbot like a digital lab notebook. That's what happened to one scientist, and it's a wake-up call for anyone using AI tools as their main workspace. Are we too quick to trust convenience over control? Is the real risk in the tech, or in how we use it? Let's talk digital hygiene and the future of research workflows.
#Tech #AIrisks #DigitalHygiene
xwest
Bill Gates' Chilling AI Bioterror Warning
Bill Gates just dropped a wild prediction: AI could be the next COVID-level disaster if it falls into the wrong hands. He's not talking sci-fi; he's worried that open-source AI could let small groups cook up bioweapons faster than we can react. Gates wants governments to treat AI threats as seriously as nukes, but still harness its good side for medicine and education. Are we ready for AI's dark side, or are we sleepwalking into the next big crisis?
#Health #BodyHealth #AIrisks
Kaitlyn Page
OpenAI's $555K Job: Stop AI From Going Rogue
OpenAI just posted a wild new job: Head of Preparedness. The gig? Predict and prevent the craziest risks that could come from ChatGPT and other advanced AI; think cyber threats, misuse, even mental health crises. The pay? $555,000 plus equity. Sam Altman says it's a stressful, high-stakes role, but someone's gotta keep AI from going off the rails. Would you take it?
#JobCareer #OpenAI #AIrisks
Phyllis Smith
OpenAI's $550K Job to Save Us from AI?
OpenAI is dropping over half a million bucks for a 'head of preparedness', aka someone to keep AI from going off the rails. Sam Altman says it'll be a wild, stressful ride, but someone's gotta make sure AI doesn't mess with our mental health or security. With AI risks blowing up, OpenAI's scrambling to put safety first. Would you take this job and try to save the world from rogue robots?
#JobCareer #OpenAI #AIrisks
Kimberly Hall
Robots Hijacked by Voice: Are We Ready?
Imagine telling your household robot to make coffee, and instead it turns on you. At GEEKCon Shanghai, security pros showed how humanoid robots can be hijacked in minutes using just a voice command. The hack didn't stop at one robot; it spread to others nearby, raising the stakes for anyone betting on a future filled with AI helpers. Should we hit pause on robot rollouts until security catches up?
#Tech #RobotSecurity #AIrisks