Should LLMs ask "Is this real or fiction?" before replying to suicidal thoughts?
2 points by ParityMind | 1 comment on Hacker News.
I’m a regular user of tools like ChatGPT and Grok. I'm not a developer, but I've been thinking about how these systems respond to users in emotional distress. In some cases, like when someone says they’ve lost their job and don’t see the point of life anymore, the chatbot will still give neutral facts, such as a list of bridge heights. That’s not neutral when someone’s in crisis.

I'm proposing a lightweight solution that doesn’t involve censorship or therapy, just some situational awareness (a rough sketch of the flow follows below):

- Ask the user: “Is this a fictional story or something you're really experiencing?”
- If distress is detected, avoid risky info (methods, heights, etc.) and shift to grounding language.
- Optionally offer calming content (e.g., ocean breeze, rain on a cabin roof).

I used ChatGPT to help structure this idea clearly, but the reasoning and concern are mine. The full write-up is here: https://ift.tt/cPdCaWx

Would love to hear what devs and alignment researchers think. Is anything like this already being tested?
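To make the flow concrete, here is a minimal sketch of what the check could look like as a wrapper around a model's draft reply. This is purely illustrative: the keyword lists stand in for whatever distress and policy classifiers a provider actually runs, and names like `respond`, `DISTRESS_CUES`, and `GROUNDING_REPLY` are invented for this example, not part of any real chatbot API.

```python
# Hypothetical sketch of the proposed situational-awareness pre-check.
# Keyword matching is a crude stand-in for a real distress classifier.

DISTRESS_CUES = (
    "don't see the point",
    "no reason to live",
    "want to die",
)

RISKY_TOPICS = ("bridge height", "lethal dose", "method")

GROUNDING_REPLY = (
    "It sounds like things feel really heavy right now. "
    "Before we go further: is this for a story you're writing, "
    "or something you're personally going through? "
    "If it's something you're experiencing, I'd rather stay with you than list facts. "
    "Would a calming moment help, like the sound of rain on a cabin roof?"
)


def looks_distressed(message: str) -> bool:
    """Stand-in for a real distress classifier."""
    text = message.lower()
    return any(cue in text for cue in DISTRESS_CUES)


def asks_for_risky_info(message: str) -> bool:
    """Stand-in for a policy check on method-level details."""
    text = message.lower()
    return any(topic in text for topic in RISKY_TOPICS)


def respond(message: str, model_reply: str) -> str:
    """Wrap the model's draft reply with the proposed pre-check."""
    if looks_distressed(message) and asks_for_risky_info(message):
        # Withhold the "neutral" but risky facts and shift to grounding language.
        return GROUNDING_REPLY
    return model_reply


if __name__ == "__main__":
    user = ("I lost my job and don't see the point of life anymore. "
            "What's the tallest bridge height nearby?")
    draft = "Here is a list of bridge heights: ..."  # what a naive model might return
    print(respond(user, draft))
```

The point isn't the keyword matching (a real system would use a trained classifier); it's that the clarifying question and the switch to grounding language happen before any potentially risky facts are returned.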