Gour's fav bar there is Wang Chung's...
I again feel that your characterization of your point shifts back and forth between different propositions depending upon whether you are touting benefits, or addressing my critiques.
You talked about the things you were creating being equal to humans. Equal rights.
You have talked about creating a self-aware, conscious (again, your words) entity.
And that entity would be so intelligent that it would expand in scale to go far beyond what we can do. You're not just talking about building a human brain model.
Add to that all the other things, like lawyer, teacher, nurse, child care worker/nanny etc., that require actual compassion and empathy to maximize performance of that role.
You want cops that don't have truly human emotions and understanding, and the ability to read people, situations, and life events?
A synthetic mental health counselor...you really think patients will react the same to such a thing as they would to a regular human unless the two were all but indistinguishable? I sure don't.
The knowledge that the person you are talking to is also a human, and truly understands what that means, is pretty critical.
And of course, whether or not you think the robots performing those tasks need sentience isn't the point.
The point is whether the singularity -- your self-aware A.I. -- decides to give it to them.
Ah, okay, this is the crux of it. If it is sentient, self-aware, and capable of designing/expanding its own scale to become more intellectually powerful to the point where it can solve almost all issues of technology quickly, which I believe you also said....
Then whether or not you, or any other humans, want it to control humanity will be irrelevant. If it decides it wants to govern/control humanity, it will. And it will be sufficiently connected via whatever passes for the internet that it could take control of whatever it wanted, including legions of robots, even making them self-aware if it chose.
Looks like I picked a good time to start sniffing glue....Actually, as much as I initially disliked the ending from the perspective of Simon, I absolutely loved/admired the game-design choice. The ending really elevated the game.
Right, I got that. She said "we lost". But even that assumption -- that their consciousness had a 50% chance of transferring -- was an assumption made by the woman/game designers.
It is entirely possible (very likely, in my opinion) that your consciousness would always remain with your body, and the scanned version would always have a consciousness of its own.
Assuming it was possible, and you were at end of life otherwise, sure. But again, assuming that the AI/singularity that designed it in the first place has any interest in letting you actually do that is a huge initial hurdle that we cannot overcome.
What sleep issue? Either sleep is required because brains are biological and need rest, or it's required for long-term memory storage. Either way, a simulated brain can either sleep or not sleep. No time at all is needed to "solve it".
Couple of points Damien:
1) I think you and Q-Tip are both a bit confused here. You would not keep a synthetic brain from developing self-awareness if it were a complete simulation of the brain... We would expect consciousness and self-awareness to be emergent from the brain at this point.
2) You do not need complete human neural network simulations to do the kinds of calculations KI and I are describing. The reason we would want a complete simulation of the brain is for other uses like medicine, longevity, and eventually replacing the biological brain either in part or whole.
3) While we don't really understand consciousness, we do understand where consciousness stems from; what parts of the brain are active in conscious thinking. We understand that we do not need to add these portions of the brain to any neural network simulation that requires human-like cognitive ability without self-awareness and self-determination.
4) Aren't you a computer programmer? You should research neural network programming a bit to get a better picture as to why this isn't as dangerous as some people might think.
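Since the post above suggests reading up on neural network programming, here is a minimal sketch (not from the thread) of what a single artificial "neuron" actually is: a weighted sum, a threshold, and a simple learning rule. This one learns logical AND with the classic perceptron update -- there is nothing self-aware hiding in the arithmetic.

```python
def step(x):
    # Threshold activation: fire (1) if the weighted sum is non-negative.
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias (the unit's threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Perceptron rule: nudge weights toward reducing the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

Scaled up to billions of such units, the mechanics stay the same; only the size and wiring change.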
Even if you had a synthetic brain that required sleep, why wouldn't it just sleep?
These kinds of limitations would likely not be present unless specifically simulated. It's like simulating an NES CPU with all the quirks and flaws vs having a "more-perfect" emulation that doesn't have these flaws built-in.
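The NES analogy above can be sketched as a configuration flag (names here are illustrative, not from any real emulator): the same simulated component can reproduce a hardware limitation faithfully or skip it, just as a simulated brain could include or omit the need for sleep.

```python
class SimulatedCPU:
    """Toy emulated CPU with a switch for quirk-accurate behavior."""

    def __init__(self, emulate_quirks=True):
        self.emulate_quirks = emulate_quirks

    def add(self, a, b):
        result = a + b
        if self.emulate_quirks:
            # Quirk-accurate mode: wrap at 8 bits, like a real 6502 register.
            result &= 0xFF
        return result

print(SimulatedCPU(emulate_quirks=True).add(200, 100))   # 44 (wraps at 8 bits)
print(SimulatedCPU(emulate_quirks=False).add(200, 100))  # 300 (idealized)
```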
In essence, you could likely operate with or without sleep depending upon how closely you wanted to emulate biological human function.
But again, this hasn't anything to do with AI doing problem solving... it's more to do with procreative AI; i.e., biological humans creating artificial humans.
Not sure what you mean by this. If we accept your hypothesis that sleep is required for long-term memory storage (which I agree with in part), then I'm not sure why sleep wouldn't simply be simulated within the emulated neural network, for one. For two, if artificial neurons do not need as much time to form and reform connections in the network as biological ones do, then one might imagine far less sleep (or no sleep) being required. And lastly, you're thinking about this in terms of writing an application that uses some kind of database under a typical programming paradigm -- that is not how an AI modeled after the brain would behave.
Damian, an AI that perfectly simulated the human brain would store memory the exact same way you store memory -- and you don't have an RDBMS in your skull.
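A toy illustration of that point (an assumed mechanism, not the poster's claim): in a brain-like network, a "memory" is not a row in a database but a pattern of connection strengths. This Hopfield-style sketch stores a pattern in its weight matrix via a Hebbian rule and recalls it from a corrupted cue.

```python
def store(patterns):
    """Store +/-1 patterns as connection weights (Hebbian learning)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]   # co-active units strengthen
    return W

def recall(W, cue, steps=5):
    """Repeatedly let each unit follow its weighted inputs."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(w * x for w, x in zip(row, s)) >= 0 else -1
             for row in W]
    return s

memory = [1, -1, 1, -1, 1, -1]     # the stored "memory"
W = store([memory])
noisy = [1, -1, -1, -1, 1, -1]     # cue with one unit flipped
print(recall(W, noisy))            # recovers the stored pattern
```

The memory lives entirely in the weights; no lookup table or query language is involved.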
We've been working on this technology for decades. KI4MVP referenced the massive project that's working on creating a complete simulation at present.
Understood on your first 3 points.
Not a programmer. I work on the strategic/managerial side. I'm more concerned with the "what" and the "why" than the "how". I'll read up on neural programming. I'm trying to build up my chops in that area (coding in general)...but it's hard to motivate myself if I'm not solving a problem that I care about solving.
"Damian, an AI that perfectly simulated the human brain would store memory the exact same way you store memory -- and you don't have an RDBMS in your skull."
This actually gets at what I'm talking about in reference to sleep: How do you and I store memory? How are those memories formed? How are they prioritized? Under my hypotheses, the answer to those questions would entail an understanding of sleep's function.
Yes, we've been working on this for decades....how far along are we? I'm ignorant to that subject.
In order to understand how memories are built and maintained, how they interact with each other, how decisions are made based on experience, all of these things (under my hypothesis) would be directly influenced by sleep, or lack thereof. If we are going to create a machine that can perform these tasks, we must understand how the information used to make a decision is stored and maintained. That would require a thorough understanding of sleep and its function.
What does it mean to "reverse engineer a brain from the cellular level"? We would have to engineer how these cells interact, no? Going down that logical path, wouldn't a thorough understanding of sleep be required?
Actually, Damien, this is the point that KI and I are making -- you would NOT need to fully understand the brain biologically in order to create a simulation of it. Many of the properties of the human brain, mind, and consciousness would be emergent from a smaller set of initial conditions and rules, yet having that smaller set of information would be sufficient to "turn on" an artificial brain from which a consciousness could emerge.
For example, if you had an artificial neuron that operated the same way a biological one would, you would not need to understand the sleep process and its function in their entirety in order to swap out the biological cell for the artificial one.
No.
In computer science, reverse engineering can be done on black box functions where you have absolutely no idea how the function is implemented. The only thing you do have is a set of inputs and outputs and thus you can perform the necessary operational transformations to get from point A to point B.
So in other words, if you have an artificial neural network that follows the correct ruleset and has the correct interconnects, then there should be no functional difference between the simulation and the real thing. Without a complete understanding of how the thing works on a macroscopic scale, you've duplicated it nonetheless.
To put this another way, there are emergent qualities of the human brain and consciousness that we will not fully understand before such an AI were to be developed, completed, and activated.
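The black-box idea above can be sketched concretely (illustrative only): fit a replacement for an unknown function using nothing but observed input/output pairs. Here the "black box" is secretly y = 3x + 2, and a least-squares fit recovers those coefficients without ever inspecting the implementation.

```python
def black_box(x):
    # Internals assumed hidden from the reverse engineer.
    return 3 * x + 2

# Observe inputs and outputs only.
xs = [0, 1, 2, 3, 4]
ys = [black_box(x) for x in xs]

# Least-squares fit of y = a*x + b from the observations alone.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(a, b)  # 3.0 2.0 -- the box is duplicated without opening it
```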
I understand.
Basically, if you're able to exactly replicate the way a neuron squirts electricity, in conjunction with every other neuron, you've got a functioning brain.
This is assuming that each neuron can, in a practical sense, be studied and replicated individually as part of the greater whole.
Right?
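A minimal sketch of "replicating the way a neuron squirts electricity": the leaky integrate-and-fire model, the simplest standard abstraction of a spiking neuron. The parameters here are illustrative, not biologically calibrated.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Accumulate input current each step, leak a fraction away,
    and emit a spike (then reset) when the threshold is crossed."""
    v = 0.0          # membrane potential
    spikes = []
    for current in inputs:
        v = v * leak + current   # integrate with leak
        if v >= threshold:
            spikes.append(1)     # "squirt": fire and reset
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# Steady weak input: the neuron fires periodically, not on every step.
print(simulate_lif([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Wire enough of these together with the right connections and you have the kind of functional duplicate being discussed above.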
Slight side topic: how much potential does 3D XPoint memory have to change the way computers are designed and function? The old model is RAM plus a disk drive, with part of the disk drive set aside for swap space; data has to be moved to/from the disk drive a block at a time.
3D XPoint could change all of this. They talk about using it to replace RAM and to replace SSDs. But why not do both at once? Why not have all memory and storage directly accessible by the CPU, instead of transferring data into "RAM" and setting up swap space on the drive? A 64-bit processor can theoretically address 16 million terabytes of data. Not 16 terabytes -- 16 million terabytes. I know current processors are limited in the amount of memory they can directly access, but with just a few more pins, the maximum amount of addressable memory could easily jump from multiple gigabytes to multiple terabytes.
It would seem that such a change could have a massive impact on the kinds of things we're talking about here, where disk latency is potentially a huge issue.
Burger-flipping robot replaces humans on first day at work
A burger-flipping robot has replaced humans at the grill of CaliBurger CREDIT: MISO ROBOTICS
9 MARCH 2017 • 10:42AM
A burger-flipping robot has just completed its first day on the job at a restaurant in California, replacing humans at the grill.
Flippy has mastered the art of cooking the perfect burger and has just started work at CaliBurger, a fast-food chain.
The robotic kitchen assistant, which its makers say can be installed in just five minutes, is the brainchild of Miso Robotics.
“Much like self-driving vehicles, our system continuously learns from its experiences to improve over time,” said David Zito, chief executive officer of Miso Robotics.
“Though we are starting with the relatively 'simple' task of cooking burgers, our proprietary AI software allows our kitchen assistants to be adaptable and therefore can be trained to help with almost any dull, dirty or dangerous task in a commercial kitchen — whether it's frying chicken, cutting vegetables or final plating.”
Cameras and sensors help Flippy to determine when a burger is fully cooked before the robot places it on a bun. A human worker then takes over and adds condiments.
More Flippy robots will be introduced at CaliBurgers next year, with the aim of installing them in 50 of their restaurants worldwide by the end of 2019.
CaliBurger say the benefits include making “food faster, safer and with fewer errors”.
View: https://www.youtube.com/watch?v=lMIkWyiJp0k