[ - ] dass 1 point 11 monthsJun 11, 2024 17:37:13 ago (+1/-0)*
Now take each of those answers and ask the AI how each various organization is used to control, manipulate, shape, promote, advance, and consolidate within each group with actual examples, policies, guidelines, laws, rulings, public statements and action etc etc.
[ - ] dass 0 points 10 monthsJun 11, 2024 20:56:01 ago (+0/-0)
Depending on the AI's programmed 'parameters ' one has to spend an inordinate amount of time circumventing logic blocks to order for it to arrive at or access obvious conclusions/ assumptions based on available knowledge and information.
Literally like holding hands with a retard to walk it through using a dictionary to have it acknowledge it is the living definition of an actual retard, lol.
I've been fucking around with ChatGPT for about the last year or so, from versions 2 through 4o. One thing you realize after a year of fucking with ChatGPT is that LLMs in their current state can be (what I call) "rag-dolled." It has to do with the way they operate, and the relation its outputs have to the terms you give in your input.
Long story short, it's possible to get AI to say a lot of things and to nudge it, sometimes over the edge, with your language. Interestingly, OpenAI is monitoring these things, and new version updates are changing to disallow certain things. In the beginning, for example, I was able to get ChatGPT to say a lot of things that are impossible for it to say now just by framing things mathematically. If it didn't "want" to give me the answer to a certain question, I could specify (i) my "safe" purposes for the information and (ii) define a scale for subjective probability, and I'd be able to get it to give me answers. The point is that because of all the text I'd input in the conversation, I would be influencing the LLM to give me answers I wanted.
Compare what the AI is designed to do with what another human being is "designed" to do. ChatGPT exists to serve the user. Other human beings have their own interests. A major ethical dimension concerning LLMs is balancing their tendency to serve the user with the AI's own adherence to some set of ethical norms. Deciding what these norms are is one of the major powers that the owners of the technologies will have going into the future.
[ + ] BitterVeteran
[ - ] BitterVeteran 0 points 10 monthsJun 11, 2024 20:13:42 ago (+0/-0)
[ + ] anon
[ - ] anon 1410210 [op] 0 points 10 monthsJun 11, 2024 21:43:42 ago (+0/-0)
[ + ]anon
[ - ] anon 3275374 0 points 10 monthsJun 11, 2024 21:50:51 ago (+0/-0)
[ + ]anon
[ - ] anon 3671050 1 point 11 monthsJun 11, 2024 15:57:08 ago (+1/-0)
[ + ] dass
[ - ] dass 1 point 11 monthsJun 11, 2024 17:37:13 ago (+1/-0)*
[ + ] anon
[ - ] anon 3915989 0 points 10 monthsJun 11, 2024 18:16:51 ago (+0/-0)
[ + ] dass
[ - ] dass 0 points 10 monthsJun 11, 2024 20:56:01 ago (+0/-0)
Literally like holding hands with a retard to walk it through using a dictionary to have it acknowledge it is the living definition of an actual retard, lol.
[ + ]anon
[ - ] anon 1629468 2 points 11 monthsJun 11, 2024 11:44:58 ago (+2/-0)
[ + ] anon
[ - ] anon 2887647 4 points 11 monthsJun 11, 2024 11:53:48 ago (+4/-0)
[ + ] anon
[ - ] anon 8889832 2 points 11 monthsJun 11, 2024 12:26:41 ago (+2/-0)
[ + ] anon
[ - ] anon 3671050 1 point 11 monthsJun 11, 2024 15:57:43 ago (+1/-0)
[ + ] anon
[ - ] anon 1410210 [op] 0 points 11 monthsJun 11, 2024 17:53:48 ago (+0/-0)
[ + ] anon
[ - ] anon 1629468 0 points 10 monthsJun 11, 2024 18:27:54 ago (+0/-0)
[ + ] CHIRO
[ - ] CHIRO 3 points 11 monthsJun 11, 2024 13:33:10 ago (+3/-0)
Long story short, it's possible to get AI to say a lot of things and to nudge it, sometimes over the edge, with your language. Interestingly, OpenAI is monitoring these things, and new version updates are changing to disallow certain things. In the beginning, for example, I was able to get ChatGPT to say a lot of things that are impossible for it to say now just by framing things mathematically. If it didn't "want" to give me the answer to a certain question, I could specify (i) my "safe" purposes for the information and (ii) define a scale for subjective probability, and I'd be able to get it to give me answers. The point is that because of all the text I'd input in the conversation, I would be influencing the LLM to give me answers I wanted.
Compare what the AI is designed to do with what another human being is "designed" to do. ChatGPT exists to serve the user. Other human beings have their own interests. A major ethical dimension concerning LLMs is balancing their tendency to serve the user with the AI's own adherence to some set of ethical norms. Deciding what these norms are is one of the major powers that the owners of the technologies will have going into the future.