A team at Los Alamos National Laboratory has developed a new way to compare neural networks that looks inside the "black box" of AI, helping researchers understand neural network behavior.
"The AI research community doesn't necessarily fully understand what neural networks are doing; They give us great results, but we don't know how or why," said
Neural networks are high-performing but fragile. To improve them, researchers are looking for ways to make networks more robust.
Jones, Los Alamos collaborators Jacob Springer and Garrett Kenyon, and Jones's mentor Juston Moore applied their new network similarity metrics to adversarially trained neural networks. They were surprised to find that as attacks grow stronger, adversarial training causes computer-vision neural networks to converge to very similar representations of the data, regardless of network architecture.
"We found that when we trained neural networks to be robust against adversarial attacks, they started doing the same thing
Industry and academia have struggled to find the "right architecture" for neural networks, but the Los Alamos team's results suggest that introducing adversarial training greatly narrows this search space.
"By finding that robust neural networks are similar to each other, it's easier for us to understand how robust AI might actually work