

Are you using GPT-4? Because it’s much better than the earlier versions.
That’s a fun story, but it isn’t applicable to the topic here. That claim could very easily be verified as true or false by a secondary system. In fact, you can just ask Wolfram Alpha. Ask it what the odds are that any two people share the same birthday. I just asked it that exact question, and it replied 1/365.
EDIT
In fact, I just asked that exact same question to GPT-4, and it also replied 1/365.
There are already multiple different LLMs in existence that are essentially completely different from one another. In fact, this is one of the major problems with LLMs: even a small change to a model can radically alter the output it returns across huge numbers of seemingly unrelated topics.
For your other point, I never said bouncing their answers back and forth for verification was trivial, but it’s definitely doable.
That’s not a problem at all. I already use prompts that allow the LLM to say it doesn’t know an answer, and it does take that option when it’s unable to find a correct one. For instance, I often phrase questions like this: “Is it known whether or not red is a color in the rainbow?” For questions where it doesn’t know the answer, it will now tell you it doesn’t know.
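For what it’s worth, here’s a minimal sketch of the kind of prompt wrapper I mean. The `ask_llm` function is just a stand-in for whatever model call you’re using, not any particular library’s API:

```python
def ask_with_unknown_option(ask_llm, question: str) -> str:
    """Phrase the question so the model is explicitly allowed to say it doesn't know."""
    prompt = (
        f"Is it known whether or not {question}? "
        "If the answer is not known, say so instead of guessing."
    )
    return ask_llm(prompt)

# Example (ask_llm is a placeholder for your actual model call):
# answer = ask_with_unknown_option(ask_llm, "red is a color in the rainbow")
```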
And to your other point, the systems may not be capable of discerning their own hallucinations, but a totally separate LLM will be able to do so pretty easily.
Give an example of a statement that you think couldn’t be verified.
No, I’ve used LLMs to do exactly this, and it works. You prompt one with a statement and ask, “Is this true, yes or no?” It will reply with a yes or no, and it’s almost always correct. Do this verification through multiple different LLMs and you would eliminate close to 100% of hallucinations.
EDIT
I just tested it multiple times in GPT-4, and it got every true/false answer correct.
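Roughly what I mean, as a sketch (the `query` function and model names are placeholders for whichever APIs you’d actually call, not a specific library):

```python
def looks_true(query, models, statement: str, threshold: float = 1.0) -> bool:
    """Ask several independent models whether a statement is true and require agreement."""
    prompt = f'Is the following statement true? Answer only "yes" or "no".\n\n{statement}'
    yes_votes = 0
    for model in models:
        reply = query(model, prompt).strip().lower()
        if reply.startswith("yes"):
            yes_votes += 1
    # Accept the statement only if enough of the models agree it's true.
    return yes_votes / len(models) >= threshold

# Example with placeholder model names:
# ok = looks_true(query, ["model_a", "model_b", "model_c"], "The Eiffel Tower is in Paris")
```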
I strongly doubt that hallucination is a limitation in the final output. It may be an inevitable part of the process, but it’s almost definitely a surmountable problem.
Just off the top of my head, I can imagine using two separate LLMs to produce the final output: the first one generates an initial answer, and the second one verifies whether what it says is accurate. The chance of two totally independent LLMs having the same hallucination is probably very low, and you can add as many additional separate LLMs for re-verification as you like. The chance of a hallucination making it through multiple LLM verifications probably gets close to zero.
While this would greatly multiply the resources required, it’s just a simple example showing that hallucinations are not inevitable in the final output.
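As a toy sketch of that two-stage idea (every name here is a hypothetical placeholder, not a real API): one model drafts the answer, and each additional, independently trained model either approves it or flags it.

```python
from typing import Callable, Optional, Sequence

def answer_with_verification(
    generate: Callable[[str], str],              # placeholder: first LLM, returns a draft answer
    verifiers: Sequence[Callable[[str], bool]],  # placeholders: independent LLMs that accept/reject
    prompt: str,
) -> Optional[str]:
    """Return the drafted answer only if every independent verifier accepts it."""
    draft = generate(prompt)
    for verify in verifiers:
        if not verify(draft):
            return None  # a verifier flagged a possible hallucination
    return draft
```

If the verifiers really do fail independently, the chance of a hallucination surviving is roughly the product of their individual miss rates, which is why adding more of them pushes it toward zero.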
Wow, that sounds really difficult; I’m sorry to hear that. It sounds like maybe you’re experiencing depersonalization.
I didn’t always think this way when I looked in the mirror; I used to think of what I see in the mirror as me. It’s something that happened as I grew older and started seeing reality in broader and broader ways.
Interesting, but when I read about it on an actual medical site, it doesn’t sound like me. I don’t feel like I’m viewing my life from the outside with no control over or connection with events around me. There’s no distress in the way I see my body as a thing I’m inside of rather than it being me. If anything, it gives me a sense of calm.
Thanks for the potential help, but I’m not trans. It’s not that my reflection looks incongruent with how I feel inside; it’s that I see my body as a physical object. Which it is. “I” am not what my body is.
Do some people not feel this way at all times? Personally, I always feel like my body isn’t “me”; it’s just the functional shell that I’m living in. Like when I get in a car and go driving, I don’t feel like the car is me; it’s just the functional shell I’m inside of. “Me” is my subjective sense of consciousness.
When I look in the mirror, I mostly see it in the third person. I see my face and think, “Hey, look at that guy,” and “That face could use a shave,” etc.
Get a Quest; you can stream your videos to a huge virtual screen for literally 10% of the price of an Apple Vision Pro.
The Apple Vision Pro will be a very good product… in a few years, after it’s much cheaper and more capable. But as of today, you can get an Oculus Quest, which does a large percentage of the same stuff, for literally 10% of the price.
I’ve always thought early spring would make much more sense than the middle of winter.
Yeah, this caption doesn’t make any sense.
That’s not a failure of the law; it’s a failure of law enforcement.
You wouldn’t be stuck with the fog; you’d be able to toggle it on and off. The purpose is to make it obvious which areas you’ve already seen, so you know which areas of the world you have yet to explore!
I’m in favor of being able to pay to access a library containing 99% of titles in existence, even though I don’t “own” permanent access to them. If I had to pay a higher price to “own” each individual title, then I would have access to VASTLY less media. But for that monthly payment, it must offer access to 99% of titles in existence, and the price must be reasonable for the amount of entertainment value I actually derive from it. Spotify is a perfect example of a 99% complete library at a reasonable price. And yeah, as you point out, the service must be available on all devices, like Spotify is.
I just asked GPT-4 that exact question, copied and pasted, and here is its response: