PLOS ONE — subject:"Computational techniques", sorted by date (newest first)
Feed: https://journals.plos.org/plosone/search/feed/atom?sortOrder=DATE_NEWEST_FIRST&unformattedQuery=subject:%22Computational+techniques%22&sort=Date,+newest+first
PLOS (webmaster@plos.org) — "accelerating the publication of peer-reviewed science". All PLOS articles are Open Access. Feed updated 2024-03-29T14:11:37Z.

Enhancing rural B&B management through machine learning and evolutionary game: A case study of rural revitalization in Yunnan, China
10.1371/journal.pone.0294267 | 2024-03-28T14:00:00Z
<p>by Wiseong Jin, Kwisik Min, Xufang Hu, Shengchao Li, Xueqin Wang, Bodong Song, Chengmeng Li</p>
The rural B&B industry is a key component of rural tourism, local economic development, and the wider rural revitalization strategy. Despite the abundance of tourism resources in Yunnan, the B&B sector faces significant challenges. It is therefore imperative to accurately identify the most pressing issues within the current B&B industry and formulate appropriate solutions to advance Yunnan’s rural revitalization efforts. This study uses recent reviews of rural B&Bs on Ctrip.com and employs machine learning techniques, including BERT, CNN, LSTM, and GRU, to identify the key management challenges currently facing Yunnan’s rural B&B industry. An analysis is then conducted to identify the key stakeholders involved in the process of improving the management of Yunnan’s B&Bs. To assess the willingness of each stakeholder to support the improvement of the rural B&B industry, this paper establishes a three-party evolutionary game model and examines the dynamic evolutionary process of management improvement within Yunnan’s rural B&B industry. Two scenarios of evolutionarily stable strategies are analyzed, and parameters impacting stakeholders’ strategy choices are simulated and evaluated. The results show that: i) Improving the "human factor" is the top priority for current management improvement because tourists are most concerned about the emotional experience, so operators need to focus on improving service attitude and emotional experience; ii) The main stakeholders in the current management optimization process of Yunnan B&Bs are the local government, B&B operators, and tourists, and under appropriate conditions the evolutionarily stable strategy of (1, 1, 1) is reachable; iii) Variables such as additional costs, tourists’ choice preferences, and government penalties significantly affect the strategy choices of stakeholders, especially B&B operators.
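As a rough illustration of how a three-party evolutionary game of this kind can be simulated, the sketch below runs replicator dynamics for three populations: government (regulate or not), B&B operators (improve or not), and tourists (prefer improved B&Bs or not). All payoff parameters (subsidy, penalty, premium, costs) are invented placeholders, not the paper's calibrated values.

```python
# Minimal replicator-dynamics sketch of a tripartite evolutionary game.
# x, y, z: fractions of governments regulating, operators improving,
# and tourists preferring improved B&Bs. Payoffs are illustrative only.

def replicator_step(x, y, z, dt=0.01,
                    subsidy=2.0, penalty=3.0, extra_cost=1.5,
                    premium=2.5, gov_benefit=1.0, regulate_cost=0.5):
    # Payoff advantage of each pure strategy over its alternative.
    d_gov = gov_benefit + penalty * (1 - y) - regulate_cost       # regulate vs. not
    d_op = subsidy * x + premium * z + penalty * x - extra_cost   # improve vs. not
    d_tr = premium * y - 0.2                                      # prefer improved vs. not
    # Replicator update: growth proportional to share * (1 - share) * advantage.
    x += dt * x * (1 - x) * d_gov
    y += dt * y * (1 - y) * d_op
    z += dt * z * (1 - z) * d_tr
    return x, y, z

def evolve(x, y, z, steps=20000):
    for _ in range(steps):
        x, y, z = replicator_step(x, y, z)
    return x, y, z
```

With these placeholder payoffs, low initial adoption (0.2, 0.2, 0.2) drifts toward the (1, 1, 1) stable strategy, mirroring the qualitative result reported in the abstract.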
This paper offers effective strategies for improving B&B management that can benefit the government, B&B operators, and tourists, and ultimately contribute to the promotion of quality rural revitalization. The paper not only identifies focal areas for improving B&B management in rural Yunnan, but also provides an in-depth understanding of stakeholder dynamics. As a result, it offers valuable insights to further the cause of quality rural revitalization.

MicroBundleCompute: Automated segmentation, tracking, and analysis of subdomain deformation in cardiac microbundles
10.1371/journal.pone.0298863 | 2024-03-26T14:00:00Z
<p>by Hiba Kobeissi, Javiera Jilberto, M. Çağatay Karakan, Xining Gao, Samuel J. DePalma, Shoshana L. Das, Lani Quach, Jonathan Urquia, Brendon M. Baker, Christopher S. Chen, David Nordsletten, Emma Lejeune</p>
Advancing human induced pluripotent stem cell derived cardiomyocyte (hiPSC-CM) technology will lead to significant progress ranging from disease modeling to drug discovery to regenerative tissue engineering. Yet, alongside these potential opportunities comes a critical challenge: attaining mature hiPSC-CM tissues. At present, there are multiple techniques to promote the maturity of hiPSC-CMs, including physical platforms and cell culture protocols. However, when it comes to making quantitative comparisons of functional behavior, there are limited options for reliably and reproducibly computing functional metrics that are suitable for direct cross-system comparison. In addition, the current standard functional metrics obtained from time-lapse images of cardiac microbundle contraction reported in the field (i.e., post forces, average tissue stress) do not take full advantage of the information present in these data (i.e., full-field tissue displacements and strains). Thus, we present “MicroBundleCompute,” a computational framework for automatic quantification of morphology-based mechanical metrics from movies of cardiac microbundles. Briefly, this computational framework offers tools for automatic tissue segmentation, tracking, and analysis of brightfield and phase contrast movies of beating cardiac microbundles. It is straightforward to implement, runs without user intervention, requires minimal input parameter selection, and is computationally inexpensive. In this paper, we describe the methods underlying this computational framework, show the results of our extensive validation studies, and demonstrate the utility of exploring heterogeneous tissue deformations and strains as functional metrics.
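To illustrate what "full-field tissue displacements and strains" means computationally, the sketch below estimates small-strain components from a displacement field by central finite differences. This is not MicroBundleCompute's implementation; it is a minimal stand-in where a synthetic displacement grid plays the role of tracked marker displacements.

```python
# Illustrative sketch: small-strain estimation from a displacement field
# via central finite differences. u[i][j] and v[i][j] are x- and
# y-displacements on a regular grid (row index i = y, column index j = x),
# with grid spacing h (e.g., pixels).

def strain_field(u, v, h=1.0):
    """Return exx, eyy, exy at interior grid points."""
    rows, cols = len(u), len(u[0])
    exx, eyy, exy = [], [], []
    for i in range(1, rows - 1):
        row_xx, row_yy, row_xy = [], [], []
        for j in range(1, cols - 1):
            du_dx = (u[i][j + 1] - u[i][j - 1]) / (2 * h)
            dv_dy = (v[i + 1][j] - v[i - 1][j]) / (2 * h)
            du_dy = (u[i + 1][j] - u[i - 1][j]) / (2 * h)
            dv_dx = (v[i][j + 1] - v[i][j - 1]) / (2 * h)
            row_xx.append(du_dx)                    # normal strain in x
            row_yy.append(dv_dy)                    # normal strain in y
            row_xy.append(0.5 * (du_dy + dv_dx))    # shear strain
        exx.append(row_xx); eyy.append(row_yy); exy.append(row_xy)
    return exx, eyy, exy
```

For a uniform stretch (u proportional to x, v proportional to y), the recovered strains are constant and match the imposed gradients, which is a quick sanity check for this kind of routine.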
With this manuscript, we disseminate “MicroBundleCompute” as an open-source computational tool with the aim of making automated quantitative analysis of beating cardiac microbundles more accessible to the community.

Evolution analysis of low-carbon cooperation of service providers based on Moran process in cloud manufacturing
10.1371/journal.pone.0299952 | 2024-03-21T14:00:00Z
<p>by Tiaojuan Han, Jianfeng Lu, Hao Zhang, Wentao Gao</p>
Low-carbon cooperation among cloud manufacturing service providers is one way to achieve carbon peak and neutrality. Such cooperation depends on the benefits to service providers of adopting low-carbon strategies and on stochastic factors such as government low-carbon policies, providers’ environmental awareness, and demanders’ low-carbon preferences. Focusing on the evolutionary process of service providers’ low-carbon strategy selection under uncertain factors, a stochastic evolutionary game model is constructed based on the Moran process, and the equilibrium conditions for low-carbon cooperation among providers are analyzed under benefit-dominated and stochastic factor-dominated situations. Through numerical simulation, the effects of the cloud platform’s cost-sharing coefficient for low-carbon investment, matching growth rate, carbon trading price, and group size on providers’ low-carbon strategy evolution are analyzed. The research results show that increasing the cloud platform’s low-carbon cost-sharing, the carbon trading price, and the group size can promote low-carbon cooperation among service providers. With greater low-carbon investment costs and greater stochastic-factor interference, the providers’ enthusiasm for low-carbon cooperation decreases. This study fills a research gap in the low-carbon cooperation evolution of cloud manufacturing providers based on stochastic evolutionary games and provides decision-making suggestions for governments and cloud platforms to encourage provider participation in low-carbon cooperation and for providers to adopt low-carbon strategies.

Mottle: Accurate pairwise substitution distance at high divergence through the exploitation of short-read mappers and gradient descent
10.1371/journal.pone.0298834 | 2024-03-21T14:00:00Z
<p>by Alisa Prusokiene, Neil Boonham, Adrian Fox, Thomas P. Howard</p>
Current tools for estimating the substitution distance between two related sequences struggle to remain accurate at high divergence. Difficulties at distant homologies, such as false seeding and over-alignment, create a high barrier for the development of a stable estimator. This is especially true for viruses, which have high mutation rates, small genomes, and sparse taxonomy. Developing an accurate substitution distance measure would help to elucidate the relationships between highly divergent sequences, interrogate their evolutionary history, and better facilitate the discovery of new viral genomes. To tackle these problems, we propose an approach that uses short-read mappers to create whole-genome maps, and gradient descent to isolate the homologous fraction and calculate the final distance value. We implement this approach as <i>Mottle</i>. Using simulated and biological sequences, <i>Mottle</i> remained stable up to 0.66–0.96 substitutions per base pair and identified viral outgroup genomes with 95% accuracy at the family–order level. Our results indicate that <i>Mottle</i> performs as well as existing programs in identifying taxonomic relationships, with more accurate numerical estimation of genomic distance at greater divergences. By contrast, one limitation is reduced numerical accuracy at low divergences, and on genomes where insertions and deletions are uncommon, when compared to alternative approaches. We propose that <i>Mottle</i> may therefore be of particular interest in the study of viruses, viral relationships, and notably for viral discovery platforms, helping in the benchmarking of homology search tools and in defining the limits of taxonomic classification methods.
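The two ingredients named in the title, a substitution model and gradient descent, can be shown in miniature. The sketch below is not Mottle's pipeline (which fits whole-genome maps and a homologous fraction jointly); it simply inverts the Jukes-Cantor substitution model by gradient descent on a single observed mismatch fraction, illustrating why corrected distances exceed raw mismatch counts at high divergence.

```python
import math

# Toy sketch: Jukes-Cantor substitution model inverted by gradient descent.
# Not Mottle's actual method; a simplified single-parameter illustration.

def expected_mismatch(d):
    """P(observed mismatch at a site) given d substitutions per site
    under the Jukes-Cantor model."""
    return 0.75 * (1.0 - math.exp(-4.0 * d / 3.0))

def fit_distance(p_obs, lr=0.5, steps=5000):
    """Gradient descent on the squared error between the model's
    predicted mismatch fraction and the observed one."""
    d = 0.1  # initial guess
    for _ in range(steps):
        err = expected_mismatch(d) - p_obs
        # d/dd of expected_mismatch is exp(-4d/3), so the gradient of
        # err**2 with respect to d is 2 * err * exp(-4d/3).
        grad = 2.0 * err * math.exp(-4.0 * d / 3.0)
        d = max(1e-6, d - lr * grad)
    return d
```

At high divergence the correction is large: an observed mismatch fraction near 0.49 corresponds to roughly 0.8 substitutions per site. (The closed form d = -0.75 ln(1 - 4p/3) exists for Jukes-Cantor; the descent loop stands in for settings where no closed form is available.)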
The code for Mottle is available at https://github.com/tphoward/Mottle_Repo.

A lightweight and secure protocol for teleworking environment
10.1371/journal.pone.0298276 | 2024-03-21T14:00:00Z
<p>by Fahad Algarni, Saeed Ullah Jan</p>
The Internet has advanced so quickly that we can now access any service at any time, from any location. As a result, people around the world benefit from the popularity and convenience of teleworking systems. Teleworking systems, however, are vulnerable to a range of attacks: an unauthorized user who enters the open communication line can compromise the whole system, which in turn creates a major hurdle for teleworkers. Professional groups have presented numerous mechanisms for securing teleworking systems, but many security issues remain, including insider, stolen-verifier, masquerade, replay, traceability and impersonation threats. In this paper, we argue that one of the root security issues in teleworking systems is the lack of a secure authentication mechanism. To provide a secure teleworking environment, we propose a lightweight and secure protocol that authenticates all participants and makes the requisite services available efficiently. The security of the presented protocol has been analyzed formally using the random oracle model (ROM) and ProVerif simulation, and informally through illustrative attack discussions. Performance has been measured in terms of computation and communication overheads. Compared with prior works, our protocol is superior to its competitors: it achieves a 73% improvement in computation cost and a 34% improvement in communication cost, making it suitable for implementation.

BetaBuddy: An automated end-to-end computer vision pipeline for analysis of calcium fluorescence dynamics in β-cells
10.1371/journal.pone.0299549 | 2024-03-15T14:00:00Z
<p>by Anne M. Alsup, Kelli Fowlds, Michael Cho, Jacob M. Luber</p>
Insulin secretion from pancreatic β-cells is integral in maintaining the delicate equilibrium of blood glucose levels. Calcium is known to be a key regulator and triggers the release of insulin. This sub-cellular process can be monitored and tracked through live-cell imaging and subsequent cell segmentation, registration, tracking, and analysis of the calcium level in each cell. Current methods of analysis typically require the manual outlining of β-cells, involve multiple software packages, and necessitate multiple researchers—all of which tend to introduce biases. Utilizing deep learning algorithms, we have therefore created a pipeline to automatically segment and track thousands of cells, which greatly reduces the time required to gather and analyze a large number of sub-cellular images and improves accuracy. Tracking cells over a time-series image stack also allows researchers to isolate specific calcium spiking patterns and spatially identify those of interest, creating an efficient and user-friendly analysis tool. Using our automated pipeline, a previous dataset used to evaluate changes in calcium spiking activity in β-cells post-electric field stimulation was reanalyzed. Changes in spiking activity were found to have been underestimated previously with manual segmentation. Moreover, the machine learning pipeline provides a powerful and rapid computational approach to examine, for example, how calcium signaling is regulated by intracellular interactions.

Bayesian inference of relative fitness on high-throughput pooled competition assays
10.1371/journal.pcbi.1011937 | 2024-03-15T14:00:00Z
<p>by Manuel Razo-Mejia, Madhav Mani, Dmitri Petrov</p>
The tracking of lineage frequencies via DNA barcode sequencing enables the quantification of microbial fitness. However, experimental noise coming from biotic and abiotic sources complicates the computation of a reliable inference. We present a Bayesian pipeline to infer relative microbial fitness from high-throughput lineage tracking assays. Our model accounts for multiple sources of noise and propagates uncertainties throughout all parameters in a systematic way. Furthermore, using modern variational inference methods based on automatic differentiation, we are able to scale the inference to a large number of unique barcodes. We extend this core model to analyze multi-environment assays, replicate experiments, and barcodes linked to genotypes. On simulations, our method recovers known parameters within posterior credible intervals. This work provides a generalizable Bayesian framework to analyze lineage tracking experiments. The accompanying open-source software library enables the adoption of principled statistical methods in experimental evolution.

Hybrid whale algorithm with evolutionary strategies and filtering for high-dimensional optimization: Application to microarray cancer data
10.1371/journal.pone.0295643 | 2024-03-11T14:00:00Z
<p>by Rahila Hafiz, Sana Saeed</p>
The standard whale optimization algorithm (WOA) is prone to suboptimal results and inefficiencies in high-dimensional search spaces. Therefore, examining the WOA's components is critical. Computer-generated initial populations often exhibit an uneven distribution in the solution space, leading to low diversity. We propose a fusion of this algorithm with a discrete recombinant evolutionary strategy to enhance initialization diversity. We conduct simulation experiments and compare the proposed algorithm with the original WOA on thirteen benchmark test functions. Simulation experiments on unimodal and multimodal benchmarks verified the superior performance of the proposed RESHWOA in terms of accuracy, minimum mean, and low standard deviation. Furthermore, we applied two data-reduction techniques, Bhattacharyya distance and signal-to-noise ratio. Support Vector Machines (SVMs) excel in dealing with high-dimensional datasets and numerical features; although an SVM already works well with its default settings, optimizing its parameters can significantly improve performance. We applied the RESHWOA and WOA methods on six microarray cancer datasets to optimize the SVM parameters. The exhaustive examination and detailed results demonstrate that the new structure has addressed the WOA’s main shortcomings. We conclude that the proposed RESHWOA performed significantly better than the WOA.

Automated code development based on genetic programming in graphical programming language: A pilot study
10.1371/journal.pone.0299456 | 2024-03-07T14:00:00Z
<p>by Pavel Kodytek, Alexandra Bodzas, Jan Zidek</p>
Continual technological advances associated with the recent automation revolution have tremendously increased the impact of computer technology in industry. Software development and testing are time-consuming processes, and the current market faces a lack of specialized experts. Introducing automation to this field could therefore improve software engineers’ common workflow and decrease the time to market. Even though many code-generating algorithms have been proposed for text-based programming languages, to the best of the authors’ knowledge, no studies deal with the implementation of such algorithms in graphical programming environments, especially LabVIEW. For this reason, the main goal of this study is to conduct a proof of concept for a requirement-based automated code-developing system within the graphical programming environment LabVIEW. The proposed framework was evaluated on four basic benchmark problems, encompassing a string model, a numeric model, a Boolean model and a mixed-type problem model, which cover fundamental programming scenarios. In all tested cases, the algorithm demonstrated an ability to create satisfactory, functional and error-free solutions that met all user-defined requirements. Even though the generated programs were burdened with redundant objects and were much more complex than programmer-developed code, this had no effect on the code’s execution speed or accuracy. Based on the achieved results, we conclude that this pilot study not only proved the feasibility and viability of the proposed concept, but also showed promising results in solving linear and binary programming tasks.
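The requirement-based genetic-programming loop described here can be miniaturized in a few dozen lines: candidate programs are expression trees, fitness is the error against user-supplied requirement cases (input/output pairs), and evolution proceeds by elitist selection and subtree mutation. The paper targets LabVIEW block diagrams; the toy below evolves arithmetic expressions instead, and every detail (operator set, mutation rate, population size) is an illustrative assumption.

```python
import random

# Toy tree-based genetic programming: evolve an expression in x that
# satisfies requirement cases of the form (input, expected output).

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.6 else random.randint(0, 3)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def run(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](run(left, x), run(right, x))

def error(tree, cases):
    # Fitness: total deviation from the user-defined requirements.
    return sum(abs(run(tree, x) - y) for x, y in cases)

def mutate(tree):
    # Replace a random subtree with a fresh random one.
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(cases, pop_size=60, generations=40):
    pop = [random_tree() for _ in range(pop_size)]
    best = min(pop, key=lambda t: error(t, cases))
    for _ in range(generations):
        ranked = sorted(pop, key=lambda t: error(t, cases))
        best = min(best, ranked[0], key=lambda t: error(t, cases))  # elitism
        pop = [best] + [mutate(random.choice(ranked[:20])) for _ in range(pop_size - 1)]
    return best
```

Elitism guarantees the returned program is the best candidate seen so far; whether it fully satisfies the requirements depends on the run, which mirrors the stochastic nature of the paper's approach.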
Furthermore, the results revealed that with further research, this under-explored field could become a powerful tool not only for application developers but also for non-programmers and low-skilled users.

BPKEM: A biometric-based private key encryption and management framework for blockchain
10.1371/journal.pone.0286087 | 2024-03-04T14:00:00Z
<p>by Hao Cai, Han Li, Jianlong Xu, Linfeng Li, Yue Zhang</p>
The fundamental technology behind Bitcoin, known as blockchain, has been studied and used in a variety of industries, especially finance. The security of blockchain is extremely important, as it affects the assets of clients and is a lifeline feature of the entire system that must be guaranteed. Currently, there is a lack of a methodical approach to guaranteeing the security and dependability of the private key throughout its whole lifecycle. Furthermore, there is no quick, easy, or secure way to create the encryption key. A biometric-based private key encryption and management framework (BPKEM) for blockchain is proposed that not only solves the private key lifecycle management problem but also maintains compatibility with existing blockchain systems. For private key encryption, a biometric-based stable key generation method is proposed. By using the relative invariance between facial and fingerprint feature points, this method converts feature points into stable and distinguishable descriptors and then uses a reusable fuzzy extractor to create a stable key. The correctness and efficiency of the newly proposed biometric-based blockchain encryption technique have been validated in experiments.

Predicting successful draft outcome in Australian Rules football: Model sensitivity is superior in neural networks when compared to logistic regression
10.1371/journal.pone.0298743 | 2024-02-29T14:00:00Z
<p>by Jacob Jennings, Jay C. Perrett, Daniel W. Wundersitz, Courtney J. Sullivan, Stephen D. Cousins, Michael I. Kingsley</p>
The aim of this study was to compare the performance of logistic regression and neural network models when predicting player draft outcome during the 2021 AFL National Draft. Physical testing, in-game movement and technical involvements were collected from 708 elite-junior Australian Rules football players during consecutive seasons. Predictive models were generated using data from 465 players (2017 to 2020). Data from 243 players were then used to prospectively predict the 2021 AFL National Draft. Logistic regression and neural network models were compared for specificity, sensitivity and accuracy using relative cut-off thresholds from 5% to 50%. Using factored and unfactored data, and a range of relative cut-off thresholds, neural networks accounted for 73% of the 40 best performing models across positional groups and data configurations. Neural networks correctly classified more drafted players than logistic regression in 88% of cases at draft rate (15%) and convergence threshold (35%). Using individual variables across thresholds, neural networks (specificity = 79 ± 13%, sensitivity = 61 ± 24%, accuracy = 76 ± 8%) were consistently superior to logistic regression (specificity = 73 ± 15%, sensitivity = 29 ± 14%, accuracy = 66 ± 11%). Where the goal is to identify talented players with draft potential, model sensitivity is paramount, and neural networks were superior to logistic regression.

Sustainable development of environmental protection talents training: Research on the behavior decision of government, university and enterprise under the background of evolutionary game
10.1371/journal.pone.0298548 | 2024-02-23T14:00:00Z
<p>by Jinxia Wang, Yunfeng Tan, Lingling Zhan, Hongjun Yang, Xieling Li, Fang Gao, Siyuan Qiu</p>
Environmental protection talents training (EPTT) is recognized as a key prerequisite for maintaining environmental sustainability. To study the influence of each player on EPTT, this paper constructs a tripartite evolutionary game model of government, university and enterprise. The equilibrium points and evolutionarily stable strategies of each participant are solved using the replicator dynamic equations, and the behaviors of each subject in EPTT are analyzed so as to clarify the behavioral characteristics and optimal strategies of the government’s participation in EPTT. The results show that enterprises occupy a more important position in influencing government decisions. The government should reduce financial incentives for enterprises and replace them with greater policy support. Meanwhile, the government should actively promote a cultivation mechanism that integrates universities and enterprises. The results of the study can provide a decision-making basis for the government to promote the sustainable development of EPTT.

Acceptance of digital phenotyping linked to a digital pill system to measure PrEP adherence among men who have sex with men with substance use
10.1371/journal.pdig.0000457 | 2024-02-22T14:00:00Z
<p>by Hannah Albrechta, Georgia R. Goodman, Elizabeth Oginni, Yassir Mohamed, Krishna Venkatasubramanian, Arlen Dumas, Stephanie Carreiro, Jasper S. Lee, Tiffany R. Glynn, Conall O’Cleirigh, Kenneth H. Mayer, Celia B. Fisher, Peter R. Chai</p>
Once-daily oral HIV pre-exposure prophylaxis (PrEP) is an effective strategy to prevent HIV, but is highly dependent on adherence. Men who have sex with men (MSM) who use substances face unique challenges maintaining PrEP adherence. Digital pill systems (DPS) allow for real-time adherence measurement through ingestible sensors. Integration of DPS technology with other digital health tools, such as digital phenotyping, may improve understanding of nonadherence triggers and development of personalized adherence interventions based on ingestion behavior. This study explored the willingness of MSM with substance use to share digital phenotypic data and interact with ancillary systems in the context of DPS-measured PrEP adherence. Adult MSM on PrEP with substance use were recruited through a social networking app. Participants were introduced to DPS technology and completed an assessment to measure willingness to participate in DPS-based PrEP adherence research, contribute digital phenotyping data, and interact with ancillary systems in the context of DPS-based research. Medical mistrust, daily worry about PrEP adherence, and substance use were also assessed. Participants who identified as cisgender male and were willing to participate in DPS-based research (N = 131) were included in this subsample analysis. Most were White (76.3%) and non-Hispanic (77.9%). Participants who reported daily PrEP adherence worry had 3.7 times greater odds (95% CI: 1.03, 13.4) of willingness to share biometric data via a wearable device paired to the DPS. Participants with daily PrEP adherence worry were more likely to be willing to share smartphone data (p = 0.006) and receive text messages surrounding their daily activities (p = 0.003), compared to those with less worry. MSM with substance use disorder, who worried about PrEP adherence, were willing to use DPS technology and share data required for digital phenotyping in the context of PrEP adherence measurement. 
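For readers unfamiliar with the reported statistic, an odds ratio with a 95% CI such as "3.7 (1.03, 13.4)" can be computed from a 2×2 contingency table via the Woolf (log) method, as sketched below. The study itself reports regression-based estimates, and the counts in the test are invented for illustration, not the study's data.

```python
import math

# Sketch: odds ratio and Woolf 95% CI from a 2x2 table.
# a: exposed with outcome,   b: exposed without outcome,
# c: unexposed with outcome, d: unexposed without outcome.

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) under the Woolf method.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

Note how small cell counts widen the interval: a CI spanning roughly 1 to 13, as in the abstract, is typical of a modest subsample.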
Efforts to address medical mistrust can increase the advantages of this technology for HIV prevention.

Ultrasonographic assessment of abnormal fetal growth related to uteroplacental-fetal biometrics and Doppler (U-AID) indices: Protocol for multicenter retrospective cohort study trial
10.1371/journal.pone.0298060 | 2024-02-15T14:00:00Z
<p>by Eun-Saem Choi, Hwasun Lee, Se Jin Lee, Young Mi Jung, Ho Yeon Kim, Seung Mi Lee, Kyung A. Lee, Hyun-Joo Seol, Hyun Sun Ko, Sung Hun Na, Dong Wook Kwak, Han-Sung Hwang, Sooran Choi, Soon-Cheol Hong, Hye-Sung Won, Suk Young Kim, Hai-Joong Kim, Ki Hoon Ahn</p>
Fetal growth restriction (FGR) is one of the leading causes of perinatal morbidity and mortality. Many studies have reported an association between FGR and fetal Doppler indices focusing on the umbilical artery (UA), middle cerebral artery (MCA), and ductus venosus (DV). The uteroplacental-fetal circulation, which affects fetal growth, consists not only of the UA, MCA, and DV, but also of the umbilical vein (UV), the placenta, and the uterus itself. Nevertheless, there is a paucity of large-scale cohort studies that have assessed the association of UV, uterine wall, and placental thickness with perinatal outcomes in FGR, in conjunction with all components of the uteroplacental-fetal circulation. Therefore, this multicenter study will evaluate the association between UV absolute flow, placental thickness, and uterine wall thickness and adverse perinatal outcomes in FGR fetuses. This multicenter retrospective cohort study will include singleton pregnant women who undergo at least one routine fetal ultrasound scan during routine antepartum care. Pregnant women with fetuses having structural or chromosomal abnormalities will be excluded. The U-AID indices (UtA, UA, MCA, and UV flow, placental and uterine wall thickness, and estimated fetal body weight) will be measured during each trimester of pregnancy. The study population will be divided into two groups: (1) the FGR group (pregnant women with FGR fetuses) and (2) the control group (those with normally growing fetuses). We will assess the association between U-AID indices and adverse perinatal outcomes in the FGR group and the difference in U-AID indices between the two groups.

Enhanced multimodal biometric recognition systems based on deep learning and traditional methods in smart environments
10.1371/journal.pone.0291084 | 2024-02-15T14:00:00Z
<p>by Sahar A. El_Rahman, Ala Saleh Alluhaidan</p>
In the field of data security, biometric security is a significant emerging concern. A multimodal biometric system with enhanced accuracy and detection rate for smart environments is still a significant challenge. The fusion of an electrocardiogram (ECG) signal with a fingerprint is an effective multimodal recognition system. In this work, unimodal and multimodal biometric systems using a Convolutional Neural Network (CNN) are developed and compared with traditional methods using different levels of fusion of fingerprint and ECG signal. This study evaluates the effectiveness of the proposed parallel and sequential multimodal biometric systems with various feature extraction and classification methods. Additionally, the performance of unimodal ECG and fingerprint biometrics utilizing deep learning and traditional classification techniques is examined. The suggested biometric systems were evaluated utilizing the ECG (MIT-BIH) and fingerprint (FVC2004) databases. Additional tests were conducted to examine the suggested models with: 1) a virtual dataset without augmentation (ODB) and 2) a virtual dataset with augmentation (VDB). The findings show that the optimum performance of the parallel multimodal system achieved 0.96 Area Under the ROC Curve (AUC) and the sequential multimodal system achieved 0.99 AUC, in comparison to unimodal biometrics, which achieved 0.87 and 0.99 AUCs for the fingerprint and ECG biometrics, respectively. The overall performance of the proposed multimodal biometrics outperformed unimodal biometrics using CNN. Moreover, the performance of the suggested CNN model for the ECG signal and the sequential multimodal system based on neural networks outperformed the other systems. Lastly, the performance of the proposed systems is compared with previously existing works.
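The two quantities central to this abstract, score-level fusion and AUC, can be sketched in a few lines. The code below is a generic illustration, not the paper's system: it fuses per-modality match scores with a weighted sum and computes AUC via the Mann-Whitney rank statistic. The genuine/impostor scores in the usage test are synthetic, not drawn from MIT-BIH or FVC2004.

```python
# Sketch of parallel (score-level) fusion for a bimodal biometric system,
# plus AUC computed as the fraction of correctly ranked
# (genuine, impostor) score pairs (Mann-Whitney statistic).

def fuse(scores_a, scores_b, w=0.5):
    """Weighted-sum fusion of two modalities' match scores."""
    return [w * a + (1 - w) * b for a, b in zip(scores_a, scores_b)]

def auc(genuine, impostor):
    """Probability that a genuine score outranks an impostor score."""
    wins = sum((g > i) + 0.5 * (g == i) for g in genuine for i in impostor)
    return wins / (len(genuine) * len(impostor))
```

On toy scores, fusing two imperfect modalities can separate genuine from impostor comparisons better than either modality alone, which is the intuition behind the multimodal gains reported above.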