For Showcase [AI], the College of Computing and the Institute of Computing and Cybersystems hosted a poster competition open to undergraduate and graduate students from all departments and majors.
Competition Winners:
- 1st Place: Kirk Thelen, graduate student, Computer Science, "Meeting Digital Learners Where They Are: Design of a Sociotechnical System for Remote Digital Assistance"
- 2nd Place: Niusen Chen, graduate student, Computer Science, "A Secure Plausibly Deniable System for Mobile Devices against Multi-snapshot Adversaries"
- 3rd Place: 5-way tie
  - Shashank Pathrudkar, "Electronic structure of bulk materials using Machine Learning"
  - Ali Awad
  - Shivayogi Akki, "Benchmarking Model Predictive Control and Reinforcement Learning for Legged Robot Locomotion"
  - Brandon Woolman, "Disrupting the Visuomotor Connection: Mild Cognitive Impairment and a Visually Guided Reaching Task"
  - Zongguang Liu, "Entanglement-Free Path Planning for Tethered Autonomous Underwater Vehicles (T-AUVs)"

Shashank Pathrudkar
Abstract:
Deterministic machine learning models have been used successfully to bypass Kohn-Sham (KS) density functional theory (DFT) simulations. However, these models cannot provide a robust measure of uncertainty in their predictions, which is essential, especially for larger systems where actual DFT simulations are unavailable. Toward this, we propose a Bayesian Neural Network-based machine learning model that can predict the electron density of metals and provide uncertainty estimates for the predicted electron density. The machine learning model maps the local atomic environment in the simulation cell to the electron density. The mapping is generated through Bayesian Neural Networks, which have stochastic rather than deterministic parameters, enabling uncertainty quantification for the outputs. Uncertainty quantification provides a way to assess confidence in the predicted electron density at scales where DFT data is not available for comparison against the ML prediction. Using the example of aluminum, we show that the model can provide bounds on electron density fields for systems not used in training, including systems orders of magnitude larger than those in the training data, systems with vacancy defects, and systems with grain boundary defects. We anticipate that the model provides a way to assess electron density predictions for systems that are inaccessible to KS-DFT.
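The core mechanism described above, stochastic network parameters that yield predictive uncertainty, can be illustrated with a toy sketch (hypothetical code, not the authors' model): sampling the weights from learned distributions and repeating the forward pass gives both a mean prediction and a spread that serves as the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Bayesian" layer: each weight has a (pretend-learned) mean and std.
w_mean, w_std = np.array([1.5, -0.7]), np.array([0.1, 0.05])
b_mean, b_std = 0.2, 0.02

def stochastic_forward(x):
    """One forward pass with weights sampled from their distributions."""
    w = rng.normal(w_mean, w_std)
    b = rng.normal(b_mean, b_std)
    return x @ w + b

x = np.array([0.8, 1.2])  # stand-in for a local-atomic-environment descriptor
samples = np.array([stochastic_forward(x) for _ in range(2000)])

mean_pred = samples.mean()   # the prediction
uncertainty = samples.std()  # the confidence measure: larger = less trustworthy
print(mean_pred, uncertainty)
```

In the real model the weight distributions are learned from DFT data and the output is an electron density field; here both the weights and the input are illustrative stand-ins.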
Abstract:
Current machine learning (ML) models usually attempt to utilize all available patient data to predict patient outcomes, ignoring the associated cost and time of data acquisition. Approximately 6.2 million adults in the USA have heart failure (HF), and the total cost of treating HF in 2020 was estimated at $43.6 billion. Cardiac resynchronization therapy (CRT) is a standard treatment for HF that coordinates the functions of the left and right ventricles, at an average cost of $60,000 per patient. The purpose of this study is to create a multi-staged ML model to predict CRT response for HF patients. The model exploits uncertainty quantification to recommend the additional collection of single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI) variables if baseline clinical variables and features from the electrocardiogram (ECG) are not sufficient. By modeling the sequential medical testing phases for CRT admission, SPECT MPI acquisition can be skipped if the patient's ECG and clinical records strongly indicate non-response.
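The gating logic described above can be sketched as follows (an illustrative toy, not the study's actual model; the functions, features, and threshold are all hypothetical): a cheap first-stage model runs on clinical/ECG features, and the expensive SPECT MPI acquisition is triggered only when that model's uncertainty is too high.

```python
# Illustrative sketch of uncertainty-gated, multi-stage prediction.
# All names, formulas, and thresholds are hypothetical stand-ins.

def stage1_predict(clinical_ecg):
    """Cheap model: returns (probability of CRT response, uncertainty)."""
    p = sum(clinical_ecg) / len(clinical_ecg)  # stand-in for a real classifier
    uncertainty = 1.0 - abs(2 * p - 1.0)       # highest when p is near 0.5
    return p, uncertainty

def stage2_predict(clinical_ecg, spect_mpi):
    """More expensive model that also uses SPECT MPI features."""
    return 0.5 * (sum(clinical_ecg) / len(clinical_ecg)) \
         + 0.5 * (sum(spect_mpi) / len(spect_mpi))

def predict_crt_response(clinical_ecg, acquire_spect, threshold=0.3):
    p, u = stage1_predict(clinical_ecg)
    if u <= threshold:           # confident enough: skip the imaging stage
        return p, "SPECT MPI skipped"
    spect = acquire_spect()      # only pay for imaging when it is needed
    return stage2_predict(clinical_ecg, spect), "SPECT MPI acquired"

# A patient whose baseline variables strongly indicate non-response:
p, note = predict_crt_response([0.05, 0.1, 0.0], acquire_spect=lambda: [0.9, 0.8])
print(round(p, 2), note)
```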
Abstract:
A physics-informed neural network (PINN) incorporates the physics of a system by satisfying its boundary value problem through the neural network's loss function. The PINN approach has shown great success in approximating the map between the solution of a partial differential equation (PDE) and its spatio-temporal coordinates. However, the PINN's accuracy suffers significantly for strongly nonlinear and higher-order time-varying partial differential equations such as the Allen-Cahn and Cahn-Hilliard equations. To resolve this problem, a novel PINN scheme is proposed that solves the PDE sequentially over successive time segments using a single neural network. The key idea is to re-train the same neural network to solve the PDE over successive time segments while satisfying the already obtained solution for all previous time segments; it is therefore named backward-compatible PINN (bc-PINN). To illustrate the advantages of bc-PINN, the Cahn-Hilliard and Allen-Cahn equations are solved. These equations are widely used to describe phase separation and reaction-diffusion systems. Additionally, two new techniques have been introduced to improve the proposed bc-PINN scheme. The first uses the initial condition of a time segment to guide the neural network map closer to the true map over that segment. The second is a transfer learning approach in which the features learned from previous training are preserved. We have demonstrated that these two techniques significantly improve the accuracy and efficiency of the bc-PINN scheme, and that convergence is improved by using a phase space representation for higher-order PDEs. The proposed bc-PINN technique is shown to be significantly more accurate and efficient than PINN.
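As a toy illustration of the sequential scheme (a sketch, not the paper's implementation), the same idea can be reduced to least squares by using a quartic model for the simple ODE u'(t) = -u(t), u(0) = 1: each new time segment re-fits the single shared model against the new physics residual plus its own stored solution on the earlier segment.

```python
import numpy as np

DEG = 4  # one shared model for all segments: u(t) = sum_k c[k] * t**k

def basis(t):
    return np.vstack([t**k for k in range(DEG + 1)]).T

def residual_rows(t):
    """Rows of the ODE residual u'(t) + u(t); linear in the coefficients c."""
    cols = [np.ones_like(t)]  # k = 0 term: d/dt(1) + 1
    for k in range(1, DEG + 1):
        cols.append(k * t**(k - 1) + t**k)
    return np.vstack(cols).T

# Segment 1 (t in [0, 1]): physics residual plus the initial condition u(0) = 1.
t1 = np.linspace(0.0, 1.0, 50)
A = np.vstack([residual_rows(t1), 100.0 * basis(np.array([0.0]))])
b = np.concatenate([np.zeros_like(t1), [100.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

# Segment 2 (t in [1, 2]): "retrain" the SAME model on the new physics while
# requiring it to reproduce the already-obtained segment-1 solution.
u_prev = basis(t1) @ c                 # stored earlier solution
t2 = np.linspace(1.0, 2.0, 50)
A = np.vstack([residual_rows(t2), basis(t1)])
b = np.concatenate([np.zeros_like(t2), u_prev])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

t_all = np.linspace(0.0, 2.0, 201)
err = np.max(np.abs(basis(t_all) @ c - np.exp(-t_all)))
print(f"max |u - exp(-t)| on [0, 2]: {err:.3e}")
```

Because this toy model is linear in its parameters, each "retraining" is a least-squares solve; in bc-PINN proper the same role is played by gradient-based retraining of the network with a compatibility loss term over previous segments.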
Abstract:
The ground state electron density — obtainable using Kohn-Sham Density Functional Theory (KS-DFT) simulations — contains a wealth of material information, making its prediction via machine learning (ML) models attractive. However, the computational expense of KS-DFT scales cubically with system size, which tends to stymie training data generation, making it difficult to develop quantifiably accurate ML models that are applicable across many scales and system configurations. Here, we address this fundamental challenge using Bayesian neural networks and employ transfer learning to leverage the multi-scale nature of the training data. Our ML models employ descriptors involving simple scalar products, comprehensively sample system configurations through thermalization, and quantify uncertainty in electron density predictions. We show that our models incur significantly lower data generation costs while allowing confident — and when verifiable, accurate — predictions for a wide variety of bulk systems well beyond training, including systems with defects, different alloy compositions, and at unprecedented, multi-million-atom scales.
Abstract:
Landslide segmentation on Earth has been a challenging computer vision task, in which the lack of annotated data and limitations on computational resources have been major obstacles to the development of accurate and scalable artificial intelligence-based models. However, accelerated progress in deep learning techniques and the availability of data-sharing initiatives have enabled significant achievements in landslide segmentation on Earth. With current capabilities in technology and data availability, replicating a similar task on other planets, such as Mars, no longer seems impossible. In this research, we present C-PLES (Contextual Progressive Layer Expansion with Self-attention), a deep learning architecture for multi-class landslide segmentation in Valles Marineris (VM) on Mars. Even though the challenges differ from those of on-Earth landslide segmentation, owing to the nature of the environment and the data characteristics, the outcomes of this research lead to a better understanding of the geology and terrain of the planet, in addition to providing valuable insights into the importance of image modality for this task. The proposed architecture combines the merits of progressive neuron expansion with attention mechanisms in an encoder-decoder framework, delivering competitive performance in comparison with state-of-the-art deep learning architectures for landslide segmentation.
Abstract:
Ensuring the integrity of data outsourced to a decentralized cloud storage system is a critical but challenging problem. To provide this guarantee, current decentralized cloud storage systems rely on blockchain and smart contracts to establish a trusted entity that can audit the storage peers. This results in significant overhead, as each smart contract is run on all the miners of the blockchain. By leveraging trusted hardware components equipped in the storage peers, this work designs a unique self-auditing protocol that ensures data integrity in the decentralized cloud without relying on blockchain and smart contracts.
Abstract:
We quickly form and maintain theories (frames) about ambiguous situations, but
there are circumstances where we need to change these frames. The Data-Frame
Sensemaking Model suggests questioning as a first step to changing a frame.
Counterfactual thinking is a potential strategy to encourage questioning
one’s frame through prompting consideration of mutability and alternatives.
The present research tests the effectiveness of reading a counterfactual
statement on participants’ questioning of a dominant frame and consideration
of an alternate frame in four ambiguous scenarios. Results show participants
questioning their initial preference for a dominant frame in response to the
counterfactual.
Abstract:
Mobile computing devices are broadly used to store, manage, and process critical data. To protect the confidentiality of stored data, major mobile operating systems provide full disk encryption, which relies on traditional encryption and requires keeping the decryption keys secret. This, however, may not hold, as an active attacker may coerce victims into disclosing their decryption keys. Plausibly deniable encryption (PDE) can defend against such a coercive attacker by disguising the secret keys with decoy keys. Leveraging the concept of PDE, various PDE systems have been built for mobile devices. However, a practical PDE system is still missing that is compatible with mainstream mobile devices and, meanwhile, remains secure when facing a strong multi-snapshot adversary. This work fills the gap by designing the first mobile PDE system secure against multi-snapshot adversaries.
Abstract:
O-linked β-N-acetylglucosamine (O-GlcNAc) is a distinct monosaccharide modification on serine (S) or threonine (T) residues of nucleocytoplasmic and mitochondrial proteins. O-GlcNAc modification (i.e., O-GlcNAcylation) is involved in the regulation of diverse cellular processes including transcription, epigenetic modifications, and cell signaling. Despite great progress in experimentally mapping O-GlcNAc sites, it is still a challenging task in many cases. There is an unmet need to develop robust prediction tools that can effectively locate O-GlcNAc sites in protein sequences of interest. In this work, we performed a comprehensive evaluation of embeddings from three prominent sequence-based large protein language models (pLMs): Ankh, ESM-2, and ProtT5 for the prediction of O-GlcNAc sites. Upon investigation, the ensemble approach that integrates embeddings from these three models, which we call LM-OGlcNAc-Site, outperforms models trained on the individual language models as well as existing predictors on almost all parameters evaluated. The precise prediction of O-GlcNAc sites will facilitate probing O-GlcNAc site-specific functions of proteins in physiology and disease. Moreover, these findings indicate the effectiveness of combining multiple protein language models for post-translational modification prediction and open up exciting avenues for further research in other protein downstream tasks.
Abstract:
With chat-based communication at the forefront of online interaction for many internet users, the importance of distinguishing tone and communicating effectively online is more relevant than ever. Some internet communities have developed systems to ensure that vocal tones and implications can be understood in chat-based media such as X (formerly known as Twitter), Reddit, and Discord. These communities are not only aiding effective communication in the absence of body language and tone inflection but are also making chat-based communication with tone indicators a growing practice across communities and online forums. Our goal is to determine the efficiency and practicality of these tone indicators for those unfamiliar with the concept, and their willingness to utilize this tool in their everyday use of chat-based communication.
Abstract:
The Tracer Method is a novel, MTU-developed design method that supports better interface design by combining eye tracking with cognitive task analysis interviews. Eye tracking often requires extensive interpretation on the part of the researcher, especially in human-centered computing environments. Using the method, critical decisions drawn from the interviews provide boundaries for the eye-tracking analysis and explanations for complex problems.
Abstract:
Given a set of points P in two dimensions, the skyline of P is the subset of points that are not dominated by any other point in P. A point p = (p.x, p.y) is said to be dominated by a point q = (q.x, q.y) if q.x > p.x and q.y >= p.y, or q.x >= p.x and q.y > p.y. We say that a point p is part of the skyline if no other point in P dominates p. Expanding the scope to include a color attribute associated with each point introduces a new aspect to the problem. Beyond identifying the skyline, our focus is on determining the distinct colors present within it.
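The definition above can be computed with a classic single sweep (an illustrative sketch, not the poster's implementation; it assumes distinct points): sort by x descending and keep each point whose y beats every y seen so far, then collect the colors of the surviving points.

```python
# Minimal colored-skyline sketch under the domination rule defined above.
# Assumes distinct points (duplicates would need special handling).

def skyline_colors(points):
    """points: list of (x, y, color). Returns (skyline, distinct_colors).

    Sort by x descending (ties: y descending) and sweep once: a point is on
    the skyline iff its y exceeds every y seen so far, i.e. no point with
    larger-or-equal x has a larger-or-equal y (strict in one coordinate).
    """
    ordered = sorted(points, key=lambda p: (-p[0], -p[1]))
    skyline, best_y = [], float("-inf")
    for x, y, color in ordered:
        if y > best_y:
            skyline.append((x, y, color))
            best_y = y
    return skyline, {c for _, _, c in skyline}

pts = [(1, 5, "red"), (2, 4, "blue"), (3, 1, "red"),
       (2, 2, "green"), (4, 0, "blue")]
sky, colors = skyline_colors(pts)
print(sky, colors)  # (2, 2, "green") is dominated by (2, 4, "blue")
```

The sweep is O(n log n) from the sort; the color set falls out of the skyline for free.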
Abstract:
In the history of access control, nearly every system designed has relied on the operating system for enforcement of its protocols. If the operating system (and specifically root access) is compromised, there are few if any solutions that can get users back into their system efficiently. In this work, we have proposed a method by which file permissions (specifically EXT’s Access Control Lists, or ACL) can be efficiently rolled back after a catastrophic failure of permission enforcement. Our key idea is to leverage the out-of-place update feature of flash memory in order to collaborate with the flash translation layer to efficiently return those permissions to a state pre-dating the failure.
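The out-of-place-update idea behind this rollback can be sketched with a toy FTL model (hypothetical and greatly simplified; real FTLs manage blocks, garbage collection, and wear leveling): because flash never overwrites a physical page in place, older versions survive, and rollback is just remapping a logical page to an earlier physical page.

```python
# Toy model of rollback via the FTL's out-of-place updates (illustrative
# sketch only; names and layout are hypothetical, not the authors' design).

class ToyFTL:
    def __init__(self):
        self.flash = []    # append-only physical pages (out-of-place writes)
        self.mapping = {}  # logical page -> history of physical addresses

    def write(self, logical, data):
        self.flash.append(data)  # never overwrite a physical page in place
        self.mapping.setdefault(logical, []).append(len(self.flash) - 1)

    def read(self, logical):
        return self.flash[self.mapping[logical][-1]]

    def rollback(self, logical, versions_back=1):
        """Remap a logical page to an older surviving physical version."""
        history = self.mapping[logical]
        del history[len(history) - versions_back:]

ftl = ToyFTL()
ftl.write("acl:/home/alice", "alice:rw")
ftl.write("acl:/home/alice", "attacker:rwx")  # catastrophic ACL corruption
ftl.rollback("acl:/home/alice")               # restore pre-failure permissions
print(ftl.read("acl:/home/alice"))
```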
Abstract:
This research introduces a novel local path planning algorithm designed to ensure entanglement-free and collision-free navigation for Tethered Autonomous Underwater Vehicles (T-AUVs). Entanglement represents a critical challenge for T-AUVs: the physical tangling of the tether with external objects or with the tethers of other vehicles can constrain mobility and compromise functionality. However, if entanglement is well managed, the tether enables continuous power supply and stable communication between robots in harsh underwater environments. To address this issue, we propose a local planner that generates a continuous, collision-free, and entanglement-free path, allowing the robot to navigate safely. To validate our approach, we have established a simulated environment within Gazebo and deployed a T-AUV. We have implemented and tested a 2D spatial version of the proposed local planner within this simulation environment. As the vehicle follows its trajectory, the planner dynamically generates "free space bubbles" along its path from the current position to the endpoint of the trajectory. This research focuses on enhancing the safety and reliability of multiple T-AUVs, which is particularly valuable for applications such as underwater exploration, surveillance, and environmental monitoring, where collaborative T-AUV operations are essential.
Abstract:
This research is dedicated to refining design parameters governing the docking head shape of mobile robots to optimize their rendezvous capabilities. An efficient docking mechanism provides significant benefits in operating multi-robot systems, enabling seamless refueling or collaborative motion between robots in scenarios such as collaborative exploration missions or autonomous fleet operations. A primary focus of this study is implementing a passive adjustment mechanism within the docking system, recognizing the inherent difficulties associated with precise control in diverse scenarios, particularly in unpredictable outdoor environments. We have identified and characterized optimized head shapes through theoretical analysis and extensive numerical simulations, enhancing the robot's docking performance and efficiency. These tailored designs empower the robot to accommodate a broad spectrum of initial poses, effectively mitigating positioning, orientation, and control errors while expanding the operational range.
Abstract:
Intelligent audio/visual warnings can affect driver behavior. Using evidence from road tests and driving simulations of intelligent systems, we studied people's driving behavior, examining their eye-gaze and braking behavior when an audio/visual warning activates. This idea has many applications in other research domains, including rail crossings and autonomous driving.
Abstract:
In the rapidly evolving landscape of maritime technology, private and government organizations have an increased demand for unmanned surface vessels (USVs). Autonomous platforms can offer enhanced efficiency, safety, and environmental sustainability when utilized effectively, and there is an urgent need to develop resilient and versatile autonomous surface vessels for applications in transport and reconnaissance. The sensory perception of autonomous vehicles of any kind is paramount to their ability to navigate and localize in their environment. Typically, the sensors used on surface vessels for localization and mapping include LiDAR, IMU, GPS, and radar, each of which has inherent weaknesses that must be accounted for in a robust system. The fusion of these sensors relies on accurate readings from each of the endpoints or on an intelligent system capable of distinguishing erroneous sensor data. This paper discusses the quantified results of simulated perturbations on autonomous marine platforms and the effects of artificial noise models and malicious actions. The goal of this work is to lay the foundation for robust autonomous watercraft that account for the limitations of sensors, environmental noise, and, most importantly, nefarious attacks. This work also addresses the underlying algorithms of common mapping and localization packages in ROS; each is thoroughly analyzed to identify and exploit its respective weaknesses, providing recommendations and the development of improved algorithms.
Abstract:
Contact with human tutors is invaluable in helping individuals overcome obstacles, build skills, and gain confidence in the use of digital technology. Public libraries and other institutions can provide shared physical spaces to facilitate this kind of learning, but there are limitations: learners may have difficulty accessing these spaces, and the technology issues they face may be inextricably situated in their homes, offices, or other locations. The Illuminated Devices project seeks to complement in-person tutoring with online assistance that meets learners where they live and work. Each Illuminated Device is an iPad with a custom portal application that facilitates communication with a human tutor, providing a broad view of user activity across hardware and software applications, and conveying tutor input to learners in a way that minimizes distraction and maximizes flow. The Illuminated system also allows tutors to record learner progress and to confer with one another on technical issues.
Abstract:
Alzheimer’s Disease (AD) is the most common form of dementia, which is known for its
impacts on cognitive functions, especially memory. Recent studies have shown that
tasks developed to probe the ability to recalibrate the visuomotor systems are impaired
in the early stages of Alzheimer’s Disease (Tippett & Sergio, 2006) and are sensitive
to differences between older adults with low vs. high risk of developing Alzheimer’s
disease (Hawkins & Sergio, 2014). For example, the visuomotor rotation task, which requires participants to adapt to a visuomotor perturbation, has been identified as a means for assessing cognition (Buch, Young & Contreras-Vidal, 2003). Tippett
and Sergio (2006) developed a reverse visually guided reaching task (rVGR) in which
participants make a series of aimed movements toward a target. During the rVGR task,
the visual cursor moves in the opposite direction of the physical reach, forcing the
participant to correct their movements by reversing the reaching direction. Measures
of performance in this task, such as movement speed and inconsistency of movements,
have been shown to change in preclinical Alzheimer’s populations (Hawkins & Sergio,
2014). The current investigation seeks to further characterize rVGR performance differences
between younger adults, older adults, and individuals with early AD. For this purpose,
we are recruiting 20 younger adults, 20 healthy older adults, and 20 early AD patients.
We predict that participants with AD will perform similarly to the controls on a VGR task but show significant deficits on the rVGR task. Additionally, correlations will be examined between performance on a neuropsychological battery and rVGR task performance to test the prediction that performance on the motor task is related to changes in cognition in AD. This work may provide a foundation for using motor tasks as a diagnostic tool for cognitive impairments in the preclinical stages of MCI and Alzheimer's Disease.
Abstract:
Accuracy and speed are pivotal when it comes to typing. However, in mixed reality space, users lose the tactile feedback that comes with a traditional keyboard. This makes it much more difficult for users to type effectively, particularly when using all 10 fingers. This study seeks to determine whether or not eye-tracking can be used to help fix the accuracy issues that accompany 10-finger typing in augmented reality. By avoiding dwell time in our model, we also hope to avoid increasing eyestrain for users. This study has recently started, and while some evidence has been gathered, more research is necessary to make progress in determining the most effective and comfortable method for users to type on virtual keyboards. Eventually, we hope to be able to use the information gathered in this study to demonstrate the utility (or lack thereof) in using eye-tracking when developing augmented reality user interfaces, as well as potentially begin development on predictive typing programs for virtual keyboards.
Abstract:
Spoken programming languages differ significantly from natural English due to the inherent variability in speech patterns among programmers and the wide range of programming constructs. In this work, we employ Wav2Vec 2.0 to enhance the accuracy of transcribing spoken programming languages like Java. By adapting a model that had prior exposure to a substantial amount of labeled natural English data with just one hour of spoken programs, we achieve a word error rate (WER) of 8.7%, surpassing the high 28.4% WER of a model trained solely on natural English. Decoding with a domain-specific N-gram model and subsequently rescoring the N-best list with a fine-tuned large language model tailored to the programming domain resulted in a WER of 5.5% on our test set.
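For reference, the WER metric quoted above is conventionally the word-level Levenshtein edit distance divided by the number of reference words (a standard definition; this is illustrative code, not the paper's evaluation script):

```python
# Word error rate via dynamic-programming edit distance over word tokens.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                        # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                        # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("public static void main", "public static void main"))  # 0.0
print(wer("int x equals zero", "int ex equals 0"))                # 0.5 (two substitutions)
```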
Abstract:
Concerns about the safety and robustness of autonomous systems extend beyond ground vehicles to those that operate on a body of water, such as unmanned surface vessels (USVs). With the increasing utilization of USVs in marine applications like maritime surveillance, environmental monitoring, and defense, ensuring the safe and effective operation of these robots has become extremely important. A variety of sensors are integrated into USVs to enable them to perceive and navigate their surroundings safely and effectively. Here, we study the critical issue of sensor degradation and other physical perturbations. Specifically, we examine how performance is impacted by degradation of the sensors (LiDAR, camera, radar, sonar) in USVs under adverse environmental conditions such as rain, fog, water spray, and biofouling. We also analyze possible disruptions during a simulated operation and identify the limitations of ROS mapping algorithms when sensors are disrupted. Our findings are expected to provide much-needed insights for designing robust and trusted USVs.
Abstract:
Legged robots have gained significant attention in recent years because they can navigate
over rough terrain, climb stairs, and move over obstacles that would be difficult
or even impossible for wheeled robots. Specifically, quadrupedal robots have a wide range of applications, including transportation tasks in industry, package delivery, and search and rescue. Currently, model predictive control (MPC) and reinforcement learning (RL) are the two major trends for controlling quadrupedal robots. However, selecting
the most suitable controller for a specific application can be a daunting task for
new researchers.
In this paper, we present a comparative study of MPC and RL controllers on the Unitree
Go1 quadrupedal robot, evaluating their performance under perturbation and model uncertainty.
Additionally, by assessing the controllers in different environments (flat/flat slippery/uneven
terrain), we aim to provide valuable insights to aid researchers in making informed
decisions when designing locomotion controllers.
Abstract:
In this research, we introduce a comprehensive deep-learning architecture for predicting
both O-GlcNAcylation and phosphorylation sites within protein sequences, as well as
their concurrent crosstalk sites. Our deep learning architecture involves an innovative multi-windowing strategy that facilitates a detailed analysis of protein sequences, capturing diverse and potentially biologically significant patterns. We used a contextualized
protein language model (ProtT5) to embed protein sequences and formulated the process
of protein O-GlcNAcylation and phosphorylation site prediction as a multi-label classification
problem.
Abstract:
Accurate prediction of Post-Translational Modification (PTM) sites within proteins is crucial for advancing our understanding of cellular functions and streamlining drug development. Despite the advent of Protein Language Models (pLMs) in proteomics, the optimal methodology for incorporating these models in PTM prediction remains largely unexplored. This study introduces a novel framework leveraging pLMs to predict a vital PTM, Crotonylation (Kcr), by exploring various strategies to optimally use pLM embeddings to represent the site of interest. Our approach not only focuses on deriving the most coherent representation from the embeddings but also ventures to interpret the influence of these embeddings on the prediction outcomes. The proposed method has been rigorously evaluated against two benchmark datasets, exhibiting substantial improvements over existing state-of-the-art predictors. This work highlights the importance and efficacy of employing and interpreting pLMs for predicting PTMs, providing a valuable contribution to the field.
Abstract:
With the increasing development of connected and autonomous vehicles (CAVs), the risk of cyber threats against them is also increasing. Compared to attacks on traditional computer systems, a CAV attack is more critical, as it not only threatens confidential data or system access but may also endanger the lives of drivers and passengers. To control a vehicle, an attacker may inject malicious control messages into the vehicle's controller area network (CAN). To make this attack persistent, the most reliable method is to inject malicious code into an electronic control unit's (ECU's) firmware. This allows the attacker to inject CAN messages and exert significant control over the vehicle, posing a safety threat to anyone in proximity.
In this work, we have designed a defensive framework that allows restoring compromised ECU firmware in real time. Our framework combines existing intrusion detection methods with a firmware recovery mechanism that uses the trusted hardware components equipped in ECUs. In particular, the firmware restoration utilizes the existing flash translation layer (FTL) in the flash storage device, and the process is made highly efficient by minimizing the information that must be restored. Further, the recovery is managed via a trusted application running in the TrustZone secure world; both the FTL and TrustZone remain secure even when the ECU firmware is compromised. Steganography is used to hide communications during recovery. We have implemented and evaluated our prototype in a testbed simulating a real-world in-vehicle scenario.
Abstract:
Image segmentation of dual-energy X-ray absorptiometry (DXA) images is an important stage in the field of medical imaging, particularly in the measurement of bone density and body composition, as well as the assessment of hip fractures. DXA scans are divided into different zones of interest, which commonly comprise bone, soft tissue, and air. The major goal is to accurately separate these zones, as this is required for estimating bone mineral density, diagnosing probable bone fractures, and assessing body fat distribution.
DXA scans are frequently used in the context of hip fracture diagnosis not only to
examine bone density but also to identify possible fractures or anomalies in the hip
region. The accurate segmentation of the hip area within the DXA image is critical
because it allows healthcare practitioners to concentrate their analysis particularly
on this region. The hip joint and surrounding bone components are isolated from the
rest of the image during DXA image segmentation for hip fractures. This segmentation
procedure aids in the detection of possible fractures, which might appear as disruptions
or irregularities in the bone shape. The early diagnosis of these fractures is critical
for commencing appropriate medical treatments, such as surgery or other procedures,
to stabilize the hip and avoid future difficulties. Furthermore, DXA image segmentation
in hip fracture evaluation can provide useful information on the amount and location
of the fracture within the hip joint. This data supports orthopedic surgeons in planning
surgical treatments and selecting the best fracture repair options. Overall, DXA image
segmentation for hip fractures is an important part of orthopedic therapy. It helps
healthcare providers to identify hip fractures quickly and correctly, resulting in
prompt interventions that can greatly enhance patient outcomes and quality of life.