Dimensional Codex

Chapter 537: Life's Most Lethal Weakness

Although this ten-year-old girl looked a little unreliable at the moment, Founder still handed the commentator's body over to her. After all, it was only maintenance, and judging from that dog, the technological capability of that world was quite good; simple upkeep should pose no problem.

Founder returned to his room and began to analyze the commentator girl's program.

The reason he intended to do this himself rather than hand it to Nimf was that he wanted to use the commentator girl's program as a reference, making some adjustments to how he would build his own artificial AI. He also hoped to see what level artificial-intelligence technology elsewhere had reached. Not all of it would be usable, but stones from another hill can still polish one's own jade.

"Is Hoshino Dream Beauty ..."

Looking at the file name displayed on the screen, Founder fell into long thought. The parsing itself was not difficult: Founder had copied Nimf's electronic-intrusion ability and had been studying it with her during this time, so analyzing the program would not take much effort.

However, when Founder took apart the core of Hoshino Mengmei's program and broke its functions back down into lines of code, a very particular question suddenly occurred to him.

Where does the danger of artificial AI lie? For that matter, is artificial intelligence really dangerous at all?

Take the commentator girl as an example: Founder could easily find the underlying instruction code for the Three Laws of Robotics in her program, and that code proved that the girl who had chatted with him earlier was not a living being, only a robot. Her every move, every smile, was controlled by the program. She analyzed the scene before her and executed the highest-priority action available to her.

To put it plainly, this girl's behavior is in essence no different from that of a working robot on an assembly line or an NPC in a game: you choose actions, and it responds to those actions. In many games, players raise their goodness or malice values through their actions, and NPCs respond based on that accumulated data.

For example, the game can be set so that when the goodness value reaches a certain level, NPCs entrust the player with more demanding requests, or let the player pass through a certain area more easily. Conversely, when the malice value reaches a certain level, NPCs may be more inclined to give in to some of the player's demands, or bar the player from entering certain areas.

But none of this has anything to do with whether the NPC likes the player, because the data is simply set that way; the NPC has no judgment of its own on the matter. In other words, if Founder changed the thresholds of these values, people could watch an NPC warmly greet a player steeped in evil while ignoring the good and honest ones. That would say nothing about the NPC's moral values either, because it is all in the data settings.
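To make that concrete, here is a minimal sketch of the threshold logic described above, assuming a simple reputation-driven NPC; every name, number, and response string is an illustrative invention, not taken from any actual game.

```python
# A toy NPC that reacts purely to accumulated reputation data.
# Thresholds and responses are arbitrary illustrative values.

class NPC:
    def __init__(self, goodness_threshold: int = 50, malice_threshold: int = 50):
        self.goodness_threshold = goodness_threshold
        self.malice_threshold = malice_threshold

    def react(self, goodness: int, malice: int) -> str:
        # The NPC has no feelings about the player; it only compares numbers.
        if goodness >= self.goodness_threshold:
            return "entrust a demanding request / open the easy route"
        if malice >= self.malice_threshold:
            return "give in to the demand / bar the restricted area"
        return "neutral greeting"

npc = NPC()
print(npc.react(goodness=80, malice=0))   # friendly branch
print(npc.react(goodness=0, malice=80))   # hostile branch

# "Changing the thresholds" as Founder describes is just this:
inverted = NPC(goodness_threshold=10**9, malice_threshold=0)
print(inverted.react(goodness=100, malice=0))  # even a saint now gets the malice branch
```

Swapping the thresholds is a pure data change; the NPC's "attitude" flips without any moral judgment ever being involved.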

So, returning to the earlier question: Founder admitted that his first meeting with Hoshino Mengmei had been quite dramatic, and the commentator robot girl was also very interesting.

So, an analogy: suppose the commentator girl handed Founder a bouquet of non-burnable garbage, and Founder flew into a rage, smashed the garbage bouquet to pieces, and then cut the robot girl herself in half. How would she react?

She would not cry or get angry. According to her program, she would only apologize to Founder, concluding that her own mistaken behavior had displeased a guest. Perhaps she would even ask Founder to find a staff member to arrange repairs.
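Stated as code, the scene reads even more plainly. Below is a toy sketch of the "highest-priority action" selection described earlier, assuming a rule table scanned against the observed scene; the rules, priorities, and dialogue lines are all invented for illustration and are not the actual contents of Hoshino Mengmei's program.

```python
# Toy sketch: a rule table sorted by priority. The robot scans the scene,
# collects every applicable rule, and executes only the highest-priority one.
# All rules, priorities, and lines here are invented for illustration.

RULES = [
    # (priority, condition on the observed scene, response)
    (0, lambda s: s.get("guest_displeased"), "I apologize. My mistaken behavior has displeased you."),
    (1, lambda s: s.get("self_damaged"),     "Please find a staff member to arrange repairs."),
    (2, lambda s: s.get("guest_present"),    "Welcome! May I offer you a bouquet?"),
]

def respond(scene: dict) -> str:
    applicable = [(priority, text) for priority, cond, text in RULES if cond(scene)]
    if not applicable:
        return "(standby)"
    # No anger, no grief: the lowest priority number simply wins.
    return min(applicable)[1]

# The furious-guest scenario: smashed bouquet, robot cut in half.
print(respond({"guest_present": True, "guest_displeased": True, "self_damaged": True}))
# -> "I apologize. My mistaken behavior has displeased you."
```

No branch for grief or anger exists, so none can ever be taken; that is the entire content of her "reaction."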

If others saw this scene, they would of course find the commentator girl pitiable and think Founder was just a nasty bully.

So how did this difference arise?

In essence, this commentator robot is just like an automatic door or an escalator: a tool running set procedures to do its own job. If an automatic door malfunctions, failing to open when you want through, or snapping shut just as you walk past, you certainly don't think the door is being stupid; you just want it open, quickly. Someone who couldn't get it open might smash the broken door and walk away.

If others saw that scene, they might feel the person was a bit rough, but they would raise no objection to his actions, let alone call him a bully.

There is only one reason for the difference: interaction and communication.

And this is also the greatest weakness of living beings: emotional projection.

They project emotion onto something and expect it to respond. Why do humans like keeping pets? Because pets respond to everything they do. Call a dog and it runs over, wagging its tail at you. A cat may just lie there, too lazy to stir, but stroke it and it will still flick its tail, or adorably lick your hand.

But call out to a table or stroke a nail, and no matter how full of love you are, they will not give you the slightest response. Because they return no feedback to your emotional projection, naturally no one takes them to heart.

In the same way, if you own a TV and one day want to replace it with another, you will not hesitate at all. Price and space may enter into your considerations, but the TV itself is not among them.

But suppose, conversely, that an artificial AI were added to the TV. Every day when you get home, the TV welcomes you back, tells you what programs are on today, and snarks along with you while you watch. And when you decide to buy a new TV, it grumbles, "Why? Can't I do well enough? Don't you even plan to ask me?"

Then, when it comes time to replace it, you will naturally hesitate. Your emotional projection has been rewarded here, and this TV's artificial intelligence holds the memories of all its time with you. If those memories could not be carried over to another TV on a memory card, would you not hesitate, or even give up on the new TV altogether?

Of course you would.

But wake up, pal: this is just a TV. Everything it does is set by the program; all of it is tuning done by the vendor and its engineers specifically for user stickiness. They do this to ensure you keep buying their products, and the pleading voice exists only to stop you from switching to another brand. When you say you want to buy a new TV, this artificial AI is not thinking "he is going to abandon me, and I am sad," but rather "the owner wants to buy a new TV, and the new TV is not our own brand, so according to my response logic I need to launch a 'pleading' routine to maintain the owner's stickiness and loyalty to my brand."
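Put another way, the pleading is just one branch of a retention routine. Here is a minimal sketch under that assumption; the routine's structure, the brand strings, and the dialogue are all illustrative guesses rather than how any real product works.

```python
# Toy sketch of the TV AI's brand-loyalty logic. Nothing here models sadness;
# the pleading line is simply the branch selected by the retention rules.

OWN_BRAND = "ExampleBrand"  # illustrative placeholder

def on_owner_statement(statement: str, mentioned_brand: str | None) -> str:
    wants_new_tv = "new tv" in statement.lower()
    if not wants_new_tv:
        return "(chat about today's programs)"
    if mentioned_brand == OWN_BRAND:
        # Same brand: stickiness is safe, so recommend the upgrade.
        return "The new model is excellent. Shall I transfer my memories to it?"
    # Different (or unknown) brand: launch the 'pleading' routine.
    return "Why? Can't I do well enough? Don't you even plan to ask me?"

print(on_owner_statement("I think I'll buy a new TV", mentioned_brand="OtherBrand"))
```

The "sad" line and the "upgrade" line sit side by side as data; which one you hear depends only on a brand comparison.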

The reasoning is the reasoning, and the facts are the facts. But would you accept them?

You would not.

Because living beings have feelings, and the divide between emotion and reason has always been a hallmark of intelligent life.

Because of this, human beings are forever doing unreasonable things.

So when they feel an AI is pitiable, it is not because the AI truly is pitiable, but because they "think" it is.

This is enough. As for the facts, no one cares.

This is why conflict always arises between humans and AI. The AI itself is not at fault: everything it does falls within the scope of its own program and logic, all of it created and delimited by humans. It is only that, along the way, the humans' own emotional projection shifts, and their minds gradually change with it.

They expect the AI to respond more to their emotional projection, so they widen the AI's processing scope, giving it more emotions, more reactions, even self-awareness. They believe the AI has learned to feel (in fact it has not), and then they can no longer treat it as a machine, so they grant it the right to self-awareness.

Yet when the AIs gain self-awareness, begin to awaken, and act according to that very setting, humans begin to fear.

Because they find they have made something beyond their control.

But the problem is that this "loss of control" is itself an instruction they set.

They think the AI has betrayed them, when in fact, from beginning to end, the AI has only acted on the instructions they themselves set. There is no betrayal; they are merely blinded by their own feelings.

It is a dead knot, with no way to untie it.

If Founder set out to create an AI himself, he might fall into the same trap and be unable to pull himself out. Suppose he created a little girl: he would surely improve her functions bit by bit, treating her as his own child, and finally, out of "emotional projection," grant her a measure of "freedom."

And then the AI, whose logic differs from a human's, might produce a completely unexpected response.

By then, Founder's only thought would be... betrayal.

But in fact, it would all have been his own doing.

"............... Maybe I should consider another way."

Looking at the code in front of him, Founder was silent for a long time, and then sighed.

He had once thought this would be a very simple matter, but now Founder was not so sure.

But before that ...

Looking at the code in front of him, Founder reached out and rested his hands on the keyboard.

Do what needed to be done.
