
    A large-scale genome analysis algorithm

    • Last Update: 2021-02-15
    • Source: Internet
    • Author: User
    A haplotype is a set of genetic variants that sit side by side on the same chromosome and are passed on to the next generation as a group. Analysing haplotypes makes it possible to understand the genetics of certain complex traits, such as the risk of developing a disease. Until now, however, this analysis usually required the genomes of family members (parents and their children), a tedious and expensive process. To overcome this problem, researchers at the University of Geneva (UNIGE), the University of Lausanne (UNIL) and the Swiss Institute of Bioinformatics have developed SHAPEIT4, a powerful computer algorithm that can identify the haplotypes of hundreds of thousands of unrelated individuals very quickly. Its results are as detailed as those obtained with family-based analyses, which could never be carried out on such a scale. The tool is now available online under an open-source license, free of charge to the entire research community. Details can be found in Nature Communications.
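    To make the concept concrete, here is a minimal sketch (hypothetical variant data, not taken from the study) of how the two haplotypes inherited from the parents combine into the unphased genotype that standard sequencing reports:

        # Minimal illustration of haplotypes vs. an unphased genotype.
        # Alleles are coded 0 (reference) and 1 (alternative) at three nearby variants.

        # The two haplotypes a child inherits, one from each parent (hypothetical data).
        haplotype_from_mother = [0, 1, 1]
        haplotype_from_father = [1, 0, 1]

        # Standard genotyping only reports, for each variant, how many copies of the
        # alternative allele are present; which chromosome carries which allele
        # (the "phase") is lost.
        genotype = [m + f for m, f in zip(haplotype_from_mother, haplotype_from_father)]
        print(genotype)  # [1, 1, 2]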
    Today, the analysis of genetic data is becoming more and more important, especially in the field of personalised medicine. The number of human genomes sequenced each year is growing exponentially, and the largest databases now cover more than a million individuals. These rich data are invaluable for a better understanding of human genetics, whether to determine the genetic weight of a particular disease or to better understand the history of human migrations. For this to make sense, however, these massive datasets must be processed computationally. "The processing power of computers, however, remains relatively stable, in contrast with the extremely rapid growth of genomic data," says Olivier Delaneau, a professor in the Department of Computational Biology at the UNIL Faculty of Biology and Medicine, who led this work.
    Better understanding the role of alleles
    Haplotypes make it possible to know a person's alleles, i.e. the genetic variants inherited from his or her parents. However, without the parents' genomes, we do not know which alleles are transmitted to the child together, and in which combinations. "If we really want to understand the genetic basis of human variation, this haplotype information is critical," explains Emmanouil Dermitzakis, a professor in the Department of Genetic Medicine and Development at the UNIGE Faculty of Medicine, who co-supervised this work.
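    The ambiguity described above can be illustrated with a short sketch (purely illustrative; this is not the SHAPEIT4 algorithm): at every heterozygous site either chromosome may carry the alternative allele, so the number of haplotype pairs compatible with an unphased genotype grows exponentially with the number of heterozygous sites.

        from itertools import product

        def compatible_haplotype_pairs(genotype):
            """Enumerate the haplotype pairs consistent with an unphased 0/1/2 genotype.

            Purely illustrative: real phasing tools such as SHAPEIT4 rely on
            statistical models across many individuals, not brute-force enumeration.
            """
            choices = []
            for g in genotype:
                if g == 0:    # homozygous reference: assignment is forced
                    choices.append([(0, 0)])
                elif g == 2:  # homozygous alternative: assignment is forced
                    choices.append([(1, 1)])
                else:         # heterozygous: either chromosome may carry the 1
                    choices.append([(0, 1), (1, 0)])

            pairs = set()
            for combo in product(*choices):
                hap1 = tuple(a for a, _ in combo)
                hap2 = tuple(b for _, b in combo)
                pairs.add(frozenset([hap1, hap2]))
            return pairs

        # Three heterozygous sites already leave four distinct phasings (2 ** (3 - 1)).
        print(len(compatible_haplotype_pairs([1, 1, 1])))  # 4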
    For example, to determine the genetic risk of a disease, scientists assess whether a given genetic variant is more or less frequent in individuals who have developed the disease, in order to determine its role in the condition under study. "With haplotypes, we carry out the same type of analysis, but it is much more accurate!" says Emmanouil Dermitzakis. The method developed by the researchers makes it possible to process a very large number of genomes, from 500,000 to 1,000,000 individuals, with standard computing power, and to determine their haplotypes without knowing their ancestors or descendants. The SHAPEIT4 tool has been successfully tested on the genomes of 500,000 individuals from the UK Biobank, a scientific database developed in the United Kingdom. "This is a typical example of what big data is," says Olivier Delaneau. "As long as it can be interpreted without being overwhelmed, such a wealth of data makes it possible to build very high-precision statistical models."
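    The case/control comparison described above can be sketched in a few lines (hypothetical counts, not data from the study); real analyses add statistical tests and corrections that are omitted here:

        # Is the alternative allele more frequent among people who developed the
        # disease than among those who did not? Genotypes are coded as 0/1/2 copies
        # of the alternative allele (hypothetical data).

        def alt_allele_frequency(genotypes):
            """Frequency of the alternative allele from 0/1/2 genotype codes."""
            return sum(genotypes) / (2 * len(genotypes))

        cases = [2, 1, 1, 2, 1, 0, 2, 1]      # individuals with the disease
        controls = [0, 1, 0, 0, 1, 0, 1, 0]   # individuals without the disease

        print(f"cases:    {alt_allele_frequency(cases):.2f}")     # noticeably higher
        print(f"controls: {alt_allele_frequency(controls):.2f}")  # than in controls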
    A transparent open-source license
    The researchers decided to make their tool available to everyone under the MIT open-source license: the entire code is freely accessible and can be modified as needed. The decision was made primarily for the sake of transparency and reproducibility, as well as to stimulate research around the world. "However, we are only giving access to the analysis tool, and under no circumstances to any database," explains Olivier Delaneau. "The tool is then used by each person on the data he or she owns."
    The new tool is more effective, faster and cheaper than its predecessors. It also helps to limit the environmental impact of digital technology: the very powerful computers used to process big data are extremely energy-hungry.