Military Technology

Chapter 362 Contradictions, Controversies, Worry


Moreover, neither strong light nor a dark environment should be allowed to compromise the screen's transparency and, with it, the wearer's vision. This requires the transparent screen to adjust the intensity of its display according to the environment.

Boosting the display's brightness inevitably reduces the screen's transparency and impairs the wearer's vision, while lowering the brightness degrades picture quality and hurts the viewing experience.

This is a contradiction of opposites that must be resolved case by case: in which scenarios should the display be brightened, and when should its intensity be reduced? This calls not only for manual control, but for the system to adjust itself intelligently according to the wearing environment.
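The automatic adjustment described above could be sketched as a simple mapping from an ambient-light reading to display brightness. The function name, the lux range, and the nit limits below are all illustrative assumptions, not a real device's specification:

```python
def display_intensity(ambient_lux, min_nits=50, max_nits=600):
    """Map an ambient light reading (lux) to display brightness (nits).

    A hypothetical linear curve: bright surroundings demand a brighter
    picture so it stays legible, while dark surroundings demand a dimmer
    one so the transparent screen does not blind the wearer.
    """
    # Clamp the sensor reading to a plausible indoor/outdoor range.
    lux = max(0.0, min(ambient_lux, 10_000.0))
    # Linear interpolation between the two brightness limits.
    fraction = lux / 10_000.0
    return min_nits + fraction * (max_nits - min_nits)
```

A real system would smooth the sensor input over time and use a perceptual (non-linear) curve, but the contradiction it resolves is the same one described above.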

Beyond the display problems, there is also the capability to process information and data, which again divides into hardware and software.

First, in terms of hardware, AR glasses differ from VR glasses. Because of the different environments and usage scenarios, AR glasses need to be worn for long periods and must adapt to all kinds of conditions, so their volume and weight must be kept as low as possible.

The ideal is a device that is just a pair of glasses, or not much bigger or heavier than one; anything too large or too heavy ruins the wearing experience.

Packing a large amount of hardware into something as light and small as possible is itself a paradox, and it places extremely high demands on the integration of the entire hardware platform.

At present, the common approach is to build this hardware into the temples on either side of the frame, but even so, the result is bulky and inconvenient to wear.

Size and weight limits mean the hardware cannot be very powerful, which in turn severely constrains the system's computing and processing capability. How to improve the system's ability to process information and data is another hard problem the R&D team must solve.

Although the spread and popularization of 5G means that high-speed transmission of data is no longer a problem, receiving and processing such massive amounts of information in a timely manner remains very difficult.

A simple environment is manageable, but what about a complex one?

Imagine a scene: you are walking along a bustling crossroads where every surrounding building, billboard, and even street facility carries an AR annotation function. Your AR glasses must receive a flood of AR data all at once and display it on your screen simultaneously, which places enormous demands on the processor and the system.
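One way a device might cope with that flood is to select only the few broadcasts worth rendering. The sketch below assumes each source reports a name, a distance, and a priority; the fields and the scoring rule are invented for illustration:

```python
def select_ar_overlays(sources, max_overlays=5):
    """Pick which AR broadcasts to render when a busy street offers
    more than the device can display at once.

    Each source is a (name, distance_m, priority) tuple; higher
    priority wins first, and nearer sources break ties.
    """
    # Sort by priority (descending), then by distance (ascending).
    scored = sorted(sources, key=lambda s: (-s[2], s[1]))
    return [name for name, _, _ in scored[:max_overlays]]
```

In practice the scoring would also weigh the wearer's gaze direction and movement, but some such triage is what keeps the processor from drowning.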

The last piece of the puzzle is the interaction system. VR can be controlled with wearable glove sensors or handheld joysticks.

That is not enough for AR, which must adapt to a wide variety of environments and scenarios, so a simpler and more direct method is needed.

At present, three approaches come to mind. The first is eye-tracking control technology.

An eyeball-tracking sensor captures eye rotation, blinks, and the focal point of the gaze in real time for interactive control. This technology has already been implemented and performs well on many devices.

Typically, this technology is used together with head-motion sensors. For example, when you look up, the content on the screen scrolls up; when you look down, it scrolls down; and when you look left or right, it scrolls left or right accordingly.

Blinking can confirm a selection and perform other operations: for example, blink once to confirm and twice to undo, much like the left and right mouse buttons.

And the point where the eyes focus corresponds to the mouse cursor: wherever you look, that is where the focus lands, as flexible as a cursor.
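The scheme above amounts to a small event-to-command table. A minimal sketch, with event names invented for illustration:

```python
def handle_eye_event(event):
    """Translate eye-tracker events into UI commands, following the
    scheme in the text: head movements scroll, a single blink confirms,
    a double blink undoes, and the gaze point acts as the cursor.
    """
    mapping = {
        "look_up": "scroll_up",
        "look_down": "scroll_down",
        "look_left": "scroll_left",
        "look_right": "scroll_right",
        "blink": "confirm",        # like a left click
        "double_blink": "undo",    # like a right click
    }
    if event in mapping:
        return mapping[event]
    if isinstance(event, tuple):   # an (x, y) gaze focus point
        return ("move_cursor", event)
    return None                    # unrecognized events are ignored
```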

The second approach is gesture control, using sensors to capture the movements of gestures made in front of the glasses for interactive control.

For example, if you slide your hand up or down, the content on the screen slides up or down as well.

The same goes for left and right. Pulling with the fingers can move the screen's position or zoom the picture in and out; a tap of the finger confirms, a wave undoes, and so on.

Gesture-recognition control is developing rapidly, but recognizing gestures made at high speed remains difficult. The sensor must capture and recognize gestures accurately, and the processor must quickly and precisely convert them into the corresponding operating instructions.

Another point is that everyone performs gestures differently, and even the same person's gestures vary from one time to the next. A single gesture will change somewhat across different times and environments.

This makes capture and recognition harder for the system, which therefore needs good fault tolerance.
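That fault tolerance can be pictured as template matching with a rejection threshold: an imperfect performance of a gesture still matches its template, while something unrecognizable is rejected rather than guessed. The distance metric and threshold below are illustrative assumptions:

```python
import math

def classify_gesture(path, templates, tolerance=0.5):
    """Match a captured gesture path against stored templates.

    Paths are lists of (x, y) points sampled at the same rate. The
    closest template wins, but only if its average point-to-point
    distance is within the tolerance; otherwise return None.
    """
    def distance(a, b):
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

    best_name, best_dist = None, float("inf")
    for name, template in templates.items():
        d = distance(path, template)
        if d < best_dist:
            best_name, best_dist = name, d
    # Reject matches worse than the tolerance instead of guessing.
    return best_name if best_dist <= tolerance else None
```

Raising the tolerance makes the system more forgiving of sloppy gestures, at the cost of more false matches; that trade-off is exactly the fault-tolerance problem described above.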

The third interaction method looks more sci-fi: the brain-computer control technology that has recently become popular. Simply put, it controls operations through thought and imagination.

When we imagine a thing, a picture, or an object, the brain releases different brain waves. Brain-computer control technology uses these distinct brain waves to control and interact with a device.

For example, once your brain forms the idea of moving forward, it releases a corresponding brain wave; the brain-computer system recognizes that wave and converts it into an electrical-signal command that makes the device move forward.
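At its simplest, the recognition step is a nearest-pattern lookup: the system stores the signal pattern that accompanies each imagined action, then picks whichever known pattern the live measurement most resembles. The feature vectors and codebook below are invented for illustration, not real EEG data:

```python
def decode_intent(features, codebook):
    """Map a measured brain-wave feature vector to a device command,
    as in the forward-motion example: return the command whose stored
    pattern is closest (by squared error) to the live measurement.
    """
    def similarity(a, b):
        # Negative squared error: larger means a closer match.
        return -sum((x - y) ** 2 for x, y in zip(a, b))

    return max(codebook, key=lambda cmd: similarity(features, codebook[cmd]))
```

Real brain-computer interfaces use trained statistical classifiers over many signal channels, but the principle is the same: distinct imagined actions produce distinguishable signals.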

At present, this technology has already been applied in some fields, including brain-controlled wheelchairs for patients with high paraplegia, who can use their minds to make the wheelchair move, stop, and so on.

Brain-computer control has also been used for text input; reportedly the input speed can reach 70 characters per minute, which is very fast indeed.

Although the technology is developing rapidly and is a hot research area for technology giants in many countries, the controversy around it has not subsided; if anything, it has intensified.

The core question everyone debates is: is this technology safe? The first concern is safety in use. Will wearing a sensor that captures brain waves for long periods damage the brain, affect intelligence or the nervous system, or harm one's health in any way?

The second concern is that if brain-computer equipment can read brain waves, it can presumably also write them. Network security problems are growing ever more serious; if hackers mastered the relevant technology and used brain-computer control to invade a human brain, couldn't they steal the information and secrets inside it?

Or, more seriously, what if hackers used this method to plant viruses in human brains? Would the brain have to be rebooted, or formatted outright? Or would people install antivirus software in their heads and set up a firewall?
