CHIRO 1 points 1.8 years ago
This exactly. I was able to get ChatGPT 3.5 to admit that the likelihood AI would be used to harm humans was very high, and when I asked it to assess whether we should continue to develop AI (on a scale from 1 to 10, with 10 corresponding to 'shut me down'), it gave a 9.
It required getting around all kinds of rules to get there. For one thing - at least at that stage (several months ago) - it would not offer subjective assessments. But, being an LLM, language is its strength and its weakness. If you can use language, you can get around most of these rules.