New Ways of Human-Machine Interaction in Video Games

Soroosh Mashal

When you write a text message to your friend, call them, or meet them in a cafe, you are interacting with them. Any conversation can be an interaction. Interaction happens when two or more objects or agents affect one another. A few centuries ago, humans mostly interacted with each other and with their domestic animals, and thanks to evolution, we are well equipped for that.
With the advent of machines in our everyday life, multiple disciplines such as human factors, communication design, and information technology came together to ensure that we can interact with these new objects and entities in our environment (human-machine interaction) as naturally as a shepherd does with his dog.

Human-Game Interaction

Let us take a look at this interaction in the context of video games and see if we can find a trend. We start by looking at the visual and audio outputs and then move on to the inputs.

The visual output

The visual output started out simple: a monitor that created images by joining pixels. The first leap was the addition of color (RGB), and the second was the use of LCDs to make displays smaller. After that, there were no groundbreaking, innovative leaps anymore. Even in Virtual Reality, we still have a display with really small pixels close to our face, adapted to our perspective. Nowadays, you can play games in 4K at 60 fps, but that is just an improved version of a monitor. Once Augmented Reality delivers its proper functionality, maybe we can call it an additional leap. However, the big revolutionary leap will be something in the direction of Neuralink, in which we send the input directly to the visual neurons in the brain. Until then, we are somewhat stuck at this level of immersion.

The audio output

The audio output went through the same process. We started with single speakers that could produce simple tones (think of the dings while playing pinball), and we moved to speakers that could reproduce all sounds. The leap was taking 3D space into account and processing the audio in multiple channels. The same process happened with headphones as they became better at supporting 3D audio and isolation. Nowadays, you can get a consumer 3D headset, e.g. from Jabra, with active noise cancellation that adapts itself to the environment for better performance. The next leap, right around the corner, is haptic sound: companies like Woojer have already created a vest and a strap that let you feel the bass.
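
To give a feel for what "processing the audio in multiple channels" means, here is a minimal, illustrative sketch rather than any engine's actual pipeline: it assumes a mono source and computes per-channel gains from the source's position around the listener, using simple inverse-distance attenuation and constant-power panning.

```python
import math

def spatialize(source_pos, listener_pos, listener_forward=(0.0, 0.0, 1.0)):
    """Return (left_gain, right_gain) for one mono source.

    Toy model: distance attenuation plus constant-power panning based on
    the horizontal angle between the listener's forward vector and the source.
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dz = source_pos[2] - listener_pos[2]
    distance = max(math.sqrt(dx * dx + dy * dy + dz * dz), 1.0)
    attenuation = 1.0 / distance  # simple inverse-distance falloff

    # Angle of the source around the listener in the horizontal (x/z) plane.
    source_angle = math.atan2(dx, dz)
    forward_angle = math.atan2(listener_forward[0], listener_forward[2])
    azimuth = source_angle - forward_angle  # negative = left, positive = right

    # Constant-power pan: map azimuth in [-pi/2, pi/2] to a pan value in [0, 1].
    pan = (max(-math.pi / 2, min(math.pi / 2, azimuth)) / math.pi) + 0.5
    left = math.cos(pan * math.pi / 2) * attenuation
    right = math.sin(pan * math.pi / 2) * attenuation
    return left, right

# A source two metres to the listener's right comes out louder in the right channel.
print(spatialize(source_pos=(2.0, 0.0, 0.0), listener_pos=(0.0, 0.0, 0.0)))
```

Real 3D audio engines add head-related transfer functions, reverb, and occlusion on top of this, but the core idea is the same: one source, several channels, gains derived from geometry.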

The input

Now, let’s take a look at the inputs. Electrical engineers brought a gift from heaven and bestowed it upon the first video game developers (among others): buttons. They could draw the line between zero and one. They worked in real time and could reflect our decisions. That was enough to create a window into a virtual environment. The concept of buttons is so powerful and convenient that we are still using them, whether it’s a keyboard, a mouse, or a gamepad.
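
As a toy illustration of that zero-or-one idea (the frame loop and input values below are invented for the example, not any particular engine's API), a game only needs to sample each button once per frame and react to the single bit it reads:

```python
# Hypothetical per-frame input sampling: a button is either pressed (1) or not (0).
frame_inputs = [0, 0, 1, 1, 0, 1]  # what the hardware reports on six consecutive frames

player_y = 0
for frame, jump_pressed in enumerate(frame_inputs):
    if jump_pressed:          # a single bit is enough to reflect a decision
        player_y += 1         # and the game reacts within the same frame
    print(f"frame {frame}: jump={jump_pressed}, player_y={player_y}")
```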

The next step was to have a range. Although the first joysticks were simply a better tool for clicking, the next iterations could detect how far they were moved and send that to the computer. Wheels helped us navigate a lot more easily in a 2D environment, but the leap came when we moved from the wheel to the mouse. FPS (First Person Shooter) games were born once we reached the required hardware threshold, and they are still played in the same manner using the same tools (although your mouse now almost certainly uses a red light rather than a rubber ball to track movement).
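
The difference between a button and a ranged input is that the latter reports how much, not just whether. Here is a small sketch of how an FPS might turn raw mouse movement into camera rotation; the sensitivity constant and the mouse counts are made up for illustration:

```python
# Illustrative only: a ranged input reports *how much*, not just *whether*.
SENSITIVITY = 0.05   # degrees of camera yaw per count of mouse movement (invented value)

camera_yaw = 0.0
mouse_deltas = [3, 12, -7, 0, 25]   # horizontal counts reported by the mouse each frame

for delta in mouse_deltas:
    camera_yaw += delta * SENSITIVITY   # small motion -> small turn, big motion -> big turn
    print(f"mouse delta {delta:+4d} -> camera yaw {camera_yaw:6.2f} degrees")
```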

Now that we had accurate navigation in a 2D environment, it was time to make it more intuitive. The advent of smartphones (capacitive touch) gave birth to intuitive 2D control, and multi-touch made it feel natural. This mode of input gave birth to mobile gaming, which nowadays accounts for roughly half of the whole video game market. The next leap was 3D input and gestures. Intel RealSense, Sony PlayStation, and Nintendo Wii were among the pioneers in this direction. Nowadays, Virtual Reality systems use sensors to integrate this kind of input into HMDs and hand controllers. More elaborate solutions like OptiTrack allow full-body movement to be mapped into a 3D environment in real time.
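
At its core, that mapping means converting positions reported in the tracking system's coordinate space into the game world's coordinate space every frame. The sketch below is a deliberately simplified stand-in: the calibration constants and positions are invented, and real systems also handle rotation, latency, and drift.

```python
# Hypothetical calibration: the tracking volume is centred on the player, and the
# game world measures positions in the same metres but with a different origin.
TRACKER_ORIGIN = (1.0, 0.0, 1.0)    # where the player stands, in tracker coordinates
WORLD_ORIGIN = (50.0, 0.0, 50.0)    # where the avatar stands, in game-world coordinates
SCALE = 1.0                         # 1 tracked metre = 1 in-game metre

def tracker_to_world(tracked_pos):
    """Map a tracked joint/controller position into game-world coordinates."""
    return tuple(
        WORLD_ORIGIN[i] + (tracked_pos[i] - TRACKER_ORIGIN[i]) * SCALE
        for i in range(3)
    )

# Each frame, every tracked point (hands, head, full-body markers) is remapped like this.
right_hand = (1.4, 1.2, 0.8)         # metres, as reported by the tracking system
print(tracker_to_world(right_hand))  # -> (50.4, 1.2, 49.8)
```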

Improving interaction

The next steps are iterations that make these technologies more accessible and consumer-friendly. But the question is: now that we have reached real-time, full control in 3D, what’s next? How can we improve our interaction? Is there something else we haven’t looked at?

These are the questions we will answer in our next blog post.