From paragraphs of emails to 3D graphics in virtual-reality environments, every piece of data traveling over the Internet can be altered by the noise it encounters along the way, such as electromagnetic interference from microwave ovens or Bluetooth devices. To guard against such errors, the data are encoded.
Since the 1950s, most error-correcting codes and their decoding algorithms have been designed together. Each code has a structure that corresponds to a particular, highly complex decoding algorithm, which often requires dedicated hardware.
Researchers from the Massachusetts Institute of Technology (MIT), Boston University, and Ireland's Maynooth University have now developed the first chip that can decode any code, regardless of its structure, with maximum accuracy, using a universal decoding algorithm called Guessing Random Additive Noise Decoding (GRAND).
By eliminating the need for multiple computationally complex decoders, GRAND achieves higher efficiency and could be applied to augmented and virtual reality, gaming, 5G networks, and connected devices that rely on processing large volumes of data with minimal delay.
The MIT study was led by Muriel Médard, the Cecil H. and Ida Green Professor in the Department of Electrical Engineering and Computer Science. Her co-authors include Amit Solomon and Wei An, both graduate students at MIT; Rabia Tugce Yazicigil, assistant professor of electrical and computer engineering at Boston University; Arslan Riaz and Vaibhav Bansal, both graduate students at Boston University; Ken R. Duffy, director of the Hamilton Institute at Maynooth University; and Kevin Galligan, a graduate student at Maynooth.
This research will be presented next week at the European Solid-State Device Research and Circuits Conference.
Focus on noise
Think of a code as a redundant hash (in this case, a series of 1s and 0s) added to the end of the original data. The rules for creating that hash are stored in a specific codebook.
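To make the idea of a codebook's "hash" concrete, here is a minimal sketch in Python. The rule set used (each redundant bit is the XOR of a few data positions) is a hypothetical illustration, not the code actually used on the chip:

```python
def encode(data_bits, parity_rules):
    """Append redundant bits (the "hash") computed from the data.

    parity_rules is a toy stand-in for a codebook rule: each entry
    lists the data positions XORed together to form one parity bit.
    """
    parity = []
    for positions in parity_rules:
        bit = 0
        for p in positions:
            bit ^= data_bits[p]
        parity.append(bit)
    return data_bits + parity

# Two parity bits computed over 3 data bits (illustrative rules)
rules = [(0, 1), (1, 2)]
print(encode([1, 0, 1], rules))  # → [1, 0, 1, 1, 1]
```

A receiver that knows the same rules can recompute the parity bits and compare them against the received ones to detect corruption.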
As encoded data travel over a network, they are affected by noise, or energy that disrupts the signal, often generated by other electronic devices. When the encoded data, and the noise that affected them, arrive at their destination, the decoding algorithm consults its codebook and uses the structure of the hash to guess what the stored information is.
GRAND works the other way around: it guesses the noise that affected the message and uses the noise pattern to deduce the original information.
GRAND generates a series of noise sequences in the order in which they are likely to occur, subtracts each from the received data, and checks whether the resulting codeword appears in the codebook. Although the noise appears random, it has a probabilistic structure that allows the algorithm to guess what it might be.
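The loop described above can be sketched in a few lines of Python. The ordering used here, enumerating noise patterns by increasing Hamming weight, is an assumption that holds for a channel that flips each bit independently with small probability; real GRAND orderings can be more refined, and the 4-bit codebook is purely illustrative:

```python
from itertools import combinations

def noise_guesses(n, max_weight):
    """Yield n-bit noise patterns from most to least likely.

    Assumption: on a channel flipping bits independently with small
    probability, fewer flipped bits means a more likely pattern, so
    patterns are enumerated by increasing Hamming weight.
    """
    yield 0  # "no noise" is the single most likely pattern
    for weight in range(1, max_weight + 1):
        for positions in combinations(range(n), weight):
            pattern = 0
            for p in positions:
                pattern |= 1 << p
            yield pattern

def grand_decode(received, codebook, n, max_weight=3):
    """Subtract each guessed noise pattern (XOR over GF(2)) from the
    received word; the first result found in the codebook is taken as
    the transmitted codeword. Returns None if no guess succeeds."""
    for noise in noise_guesses(n, max_weight):
        candidate = received ^ noise
        if candidate in codebook:
            return candidate
    return None

# Toy 4-bit codebook (hypothetical, for illustration only)
codebook = {0b0000, 0b0111, 0b1011, 0b1100}
received = 0b0101  # 0b0111 with one bit flipped by noise
print(bin(grand_decode(received, codebook, n=4)))  # → 0b111
```

Note that the decoder never exploits the internal structure of the code; it only tests membership in the codebook, which is why the same procedure works for any code.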
In a way, this is similar to troubleshooting. If someone brings a car into the shop, the mechanic does not start by mapping the entire car onto blueprints; instead, they check the most likely faults first.
New hardware
The GRAND chip uses a three-tiered structure, starting with the simplest possible solutions in the first stage and working up to longer and more complex noise patterns in the two subsequent stages. Each stage operates independently, which increases the throughput of the system and saves power.
The device can also switch seamlessly between two codebooks. It contains two static random-access memory chips: one decodes codewords while the other loads a new codebook, then switches over to decoding without any downtime.
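The two-bank codebook arrangement is a classic double-buffering scheme. The following sketch models it in Python under assumed interfaces (the class and method names are hypothetical, not the chip's actual design): one bank serves lookups while the other is refilled, and a switch swaps their roles instantly:

```python
class DualCodebookDecoder:
    """Sketch of a ping-pong codebook scheme: one memory bank serves
    codebook lookups while the other is loaded with a new codebook;
    swapping the roles takes effect immediately, with no downtime."""

    def __init__(self, codebook):
        self.banks = [set(codebook), set()]
        self.active = 0  # index of the bank currently used for decoding

    def load_next(self, new_codebook):
        # Fill the inactive bank while decoding continues on the active one.
        self.banks[1 - self.active] = set(new_codebook)

    def switch(self):
        # Swap roles; lookups now hit the freshly loaded codebook.
        self.active = 1 - self.active

    def is_codeword(self, word):
        return word in self.banks[self.active]

dec = DualCodebookDecoder({0b0000, 0b0111})
dec.load_next({0b1111, 0b1000})   # stage the next code in the background
print(dec.is_codeword(0b0111))    # still serving the old codebook → True
dec.switch()
print(dec.is_codeword(0b1111))    # now serving the new codebook → True
```

In hardware, the same idea is realized with two physical SRAMs rather than two Python sets, but the control logic is analogous.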
The researchers tested the GRAND chip and found that it can effectively decode any moderately redundant code up to 128 bits in length, with a latency of only about a microsecond.
Médard and her collaborators had previously demonstrated the success of the algorithm, but this new work is the first to show the effectiveness and efficiency of GRAND in hardware.
Médard said that developing hardware for the new decoding algorithm required the researchers to first set aside their preconceptions.
"We couldn't go out and reuse what we had already done. It was like a complete whiteboard. We had to think about every component from scratch. It was a journey of rethinking. I think when we develop the next chip, we will notice things we did out of habit or assumption, and realize we can do better," she said.
Chips of the future
Since GRAND only uses the codebook for verification, the chip works not only with legacy codes but also with codes that have yet to be introduced.
In the run-up to 5G, regulators and communications companies struggled to reach a consensus on which codes should be used in the new network. Regulators ultimately chose two traditional codes for 5G infrastructure, to be used in different situations.
Médard said that using GRAND could eliminate the need for such rigid standardization in the future.
The GRAND chip could even open the field of coding to a wave of innovation. "For reasons I'm not entirely sure of, people approach coding with awe, as if it were magic. The process is mathematically demanding, so people just use codes that already exist. I hope this reshapes the discussion so it is less standards-oriented, enabling people to use codes that already exist and to create new codes," she said.
Next, Médard and her collaborators plan to tackle the problem of soft detection with a modified version of the GRAND chip. In soft detection, the received data are less precise.
They also plan to test GRAND's ability to crack longer and more complex codes, and to adjust the structure of the silicon chip to improve its energy efficiency.
This research was funded by the Battelle Memorial Institute and Science Foundation Ireland.