
Almost an Agent: What GPTs can do


"Role prompting"... telling the model to assume a role has never been a good way to elicit capabilities/style/etc.
For instance, if you ask one of the Claude models to simulate Bing Sydney, assuming you can get it to consent, the simulation will probably be very inaccurate. But if you use a prompt that tricks them into ... See more
j⧉nusx.com
It’s both shockingly bad and shockingly good at explaining difficult matters. ChatGPT can explain Kant or the theory of relativity, and it can translate cuneiform documents at frightening speed. Mind you: you cannot trust it. It has no conscience in any way. It does not understand what it does, and it does not know what is good or bad.