Research Projects

Holistic Innovation in Additive Manufacturing 2.0 (HI-AM 2.0): Capitalizing on Prior Achievements and Exploring New Frontiers in Directed Energy Deposition Processes

NSERC Alliance
Co-Principal Investigator and Leading Researcher
The Pan-Canadian NSERC Strategic Network, "Holistic Innovation in Additive Manufacturing (HI-AM)," was established in 2017 to coordinate and accelerate national research efforts in additive manufacturing (AM). Despite the challenges posed by the pandemic, HI-AM 1.0 achieved exceptional outcomes. For example, the initial goal of training 85 highly qualified personnel (HQP) was surpassed, with 131 HQP successfully trained. The network's principal investigators and HQP generated 405 publications, filed ten patents and invention disclosures, and registered two standards development working groups with ASTM/ISO. One of the standards drafts is currently undergoing an international ballot. Building on the success of HI-AM 1.0, the team will leverage the established network through multiple independent NSERC Alliance proposals. The current proposal focuses on Directed Energy Deposition (DED) processes, one of the seven classes of AM. It aims to address two primary research thrusts: 1) Model-Driven Workflow for In-Situ Monitoring and Quality Assurance in Robotic DED Platforms Used for Large-Scale AM, and 2) DED Process Development for Advanced Materials, Novel Products, Large-Scale Manufacturing, and Parts Refurbishment/Repair. The multidisciplinary HI-AM 2.0-DED team, comprising seven universities and eleven companies, has made significant advancements in intelligent control technologies, innovative DED processes, and new materials. These achievements provide a foundation for overcoming obstacles in commercial AM applications within the framework of HI-AM 2.0 and position Canada as an important player in the global AM supply chain. The team's industry partners span sectors such as aerospace, automotive, defense, energy, natural resources, and tooling, underscoring the potential impact of their research. This project aims to sustain the momentum of AM initiatives in Canada, given the critical timing amidst global geopolitical and supply chain challenges.
The proposed Alliance seeks to establish partnerships, develop intellectual property, and train HQP essential for Canada's competitiveness in the emerging field of AM.

Canadian Alliance in Cold Spray Technology (CACST)

Canada Foundation for Innovation
Co-Principal Investigator and Leading Researcher
Cold spray (CS) is a fast-growing green manufacturing paradigm with unprecedentedly high throughput, poised to become a leading manufacturing technology of the future. CS uses environmentally friendly air, nitrogen, or helium gas to accelerate micrometer-sized particles to supersonic velocities, enabling rapid layer-by-layer deposition at temperatures far below the materials' melting points. The technology has opened up new possibilities for the deposition of metallic, ceramic, and metal/ceramic composite materials at printing rates superior to those of any other additive technology. Canada is positioned to lead CS R&D internationally, where innovation in spray equipment, application diversification, and fast-manufacturing lifecycle management will play disruptive roles. The requested Canadian Alliance in Cold Spray Technology (CACST) facilities will connect and accelerate world-class CS research underway at the Universities of Waterloo, Ottawa, Western, New Brunswick, Sherbrooke, Toronto, and Windsor. They will enable a holistic approach to CS technology, providing cutting-edge infrastructure for novel CS-specific powder synthesis and fabrication, process integration, an in-situ preconditioning open-source CS coatings platform, virtual testing, and intelligent CS additive manufacturing for tailored products and time-sensitive applications.

Remanufacturing - A manufacturing paradigm shift for deep decarbonization in a sustainable economy

NSERC Alliance
Co-Principal Investigator and Leading Researcher
Every stage of manufacturing, from raw materials extraction, refinement, and processing to part production, contributes immensely to greenhouse gas (GHG) emissions. The proposed project will connect expertise across mechanical and environmental engineering, computer science, supply chain and logistics, and climate change governance and sustainability to drive advances in the technologies needed for part assessment and remanufacturing, and to generate the data needed to build understanding and predictive models that capture the benefits of a remanufacturing-based decarbonization strategy and its impact on policy. The proposed strategy will significantly reduce GHG emissions, provide exceptional economic opportunities for SMEs, and generate the data required for decision makers to propose remanufacturing-based policies.

Workload-driven query planning and optimization using machine learning

Mitacs Accelerate
Principal Investigator and Leading Researcher
The applications of Machine Learning (ML) and Deep Learning (DL) have proliferated in most aspects of traditional computer science, and the data management discipline is no exception. Rule-based modules are being replaced by ML/DL-based counterparts that effectively ‘mine the rules’ from experience. Approaches that rely on crude statistics are rapidly being outdated by ones that ‘learn’ the functional dependencies, correlations, and skewness of the underlying data. In this project, we develop novel techniques that we will integrate into a learned query optimizer. The research will be conducted with the IBM research team that develops Db2, IBM's well-known relational DBMS, with the goal of integrating the learned optimizer into Db2.

Workload-driven query planning and optimization using machine learning

IBM CAS
Principal Investigator and Leading Researcher
We aim to exploit information available from the underlying data and the workload, as well as optimizer and runtime feedback, in order to continuously learn the best strategies for enumerating join orders and estimating execution costs with improved accuracy, leading to faster query execution. We initially investigate the properties of partial or complete join graphs that correlate with the associated cost, which in turn can enable early pruning in the current join-ordering process. This information can eventually be used to design and develop a machine learning model that learns the patterns associated with higher execution costs.

Creation of an Assignment System Employing Data Analytics for The Improvement of The Document Translation Process of The Translation Bureau

Federal Bureau of Translation, Canada
Principal Investigator and Leading Researcher
The Translation Bureau stores the metadata of the documents it translates, collected over more than 12 years. This project will perform analytics on this metadata to extract useful and valuable insights and reveal past trends in the assignment of documents to translators. The information gained through exploratory data analytics will be used to create an assignment system for documents at the Translation Bureau that is more efficient and productive than the existing one.

Managing the Performance of Big Data Analytics on Heterogeneous Infrastructures

Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant
Principal Investigator and Leading Researcher
For a range of major scientific computing challenges that span fundamental and applied science, the deployment of Big Data Applications (BDAs) on a large-scale system, such as an internal or external cloud, a private cluster, or even distributed public volunteer resources (crowd computing), needs to be offered with guarantees of predictable performance and utilization cost. Currently, however, this is not possible, because scientific communities lack the technology, at the level of both modeling and analytics, to identify the key characteristics of BDAs and their impact on performance. Few data or simulations are available that address the role of system operation and infrastructure in determining overall performance. This project will fill this gap by producing a deeper understanding of how to optimize the deployment of BDAs on hybrid large-scale infrastructures.

IoT2Edge: Allocating selected IoT processing and storage activities to edge nodes to optimize performance and resource consumption ensuring interoperability

Horizon 2020, Scientific Excellence (RAWFIE-OC3-SCI)
Principal Investigator and Leading Researcher
IoT2Edge aims to promote scientific excellence in the areas of edge computing and semantic interoperability in IoT environments via experimentation based on the RAWFIE federated testbed infrastructure. The IoT2Edge semantic interoperability mechanisms will be based on open specifications/standards and will be pluggable on top of the RAWFIE platform. The edge computing modules will enable dynamic offloading of resource-intensive processes to optimize edge/fog/cloud node resource usage. To this end, IoT2Edge will develop new modules in the form of RAWFIE experiment supportive software (SSW). These will be evaluated via demanding experiments driven by an IoT-enabled emergency-detection use case, and the generated datasets will be carefully handled and made available to the research community. It is envisioned that the developed SSW will extend the RAWFIE architecture by enhancing its openness and will be made available for other types of experiments. NTUA has wide expertise in all related technical domains (i.e., IoT, semantics, optimization, resource allocation, FIRE+) and a long record of successful relevant projects.

Deployment Optimization for Big Data Applications on Hybrid Large-Scale Computing Infrastructures

Swiss National Science Foundation (SNSF) PRN Big Data Project
Applicant and Leading Researcher
For a range of major scientific computing challenges that span fundamental and applied science, the deployment of Big Data Applications (BDAs) on a large-scale system, such as an internal or external cloud, a private cluster, or even distributed public volunteer resources (crowd computing), needs to be offered with guarantees of predictable performance and utilization cost. Currently, however, this is not possible, because scientific communities lack the technology, at the level of both modeling and analytics, to identify the key characteristics of BDAs and their impact on performance. Few data or simulations are available that address the role of system operation and infrastructure in determining overall performance. This project will fill this gap by producing a deeper understanding of how to optimize the deployment of BDAs on hybrid large-scale infrastructures. Using a novel combination of Big Data analytics and modeling results, we aim to improve the performance of three major scientific infrastructures: the Worldwide LHC Computing Grid (WLCG) at CERN in high-energy physics, Vital-IT (part of the Swiss Institute of Bioinformatics (SIB)) in bioinformatics, and Baobab, the high-performance computing cluster of the University of Geneva.

ASAP: An Adaptive, highly Scalable Analytics Platform (2014-2017)

FP7-ICT-2013-11, ICT-2013.4.2 (Scalable data analytics), Specific Targeted Research Projects (STReP)
Applicant from the University of Geneva and Leading Researcher
Data analytics tools have become essential for harnessing the power of the data deluge. Current technologies are restrictive, as their efficacy is usually bound to a single data and compute model, often depending on proprietary systems. This project proposes a unified, open-source execution framework for scalable big data analytics. The project makes the following innovative contributions: (a) a general-purpose task-parallel programming model; (b) a modeling framework that constantly evaluates the cost, quality, and performance of data and computational resources in order to decide on the most advantageous storage, indexing, and execution pattern available; (c) a unique adaptation methodology that enables the analytics expert to amend a task she has submitted at an initial or later stage; and (d) a state-of-the-art visualization engine that enables the analytics expert to obtain accurate, intuitive results of the analytics tasks she has initiated in real time.

An open market for cloud data services (2013-2014)

Swiss National Science Foundation (SNSF) Project
Co-applicant from the University of Geneva and Leading Researcher
This project aims to fill the gap between providers and consumers of data services in today's cloud business by offering an all-inclusive solution for the provision of efficient and appropriate data services, encapsulated in SLAs capable of expressing these qualities. We propose the exchange of cloud data services in an open market, where cloud providers and their customers can freely advertise the data services they offer and request, and make contracts for service provisioning.

Data services in cloud federations and big data analytics (2012-2015)

University of Geneva
Applicant and Leading Researcher
The paradigm of cloud computing is rapidly gaining ground as an alternative to traditional information technology, since it combines utility and grid computing. Very recently, research in the data management field has focused on the provision of cloud data services, i.e., the transparent management of data residing in the cloud, taking advantage of the elasticity of the cloud infrastructure to maximize performance. Data management mainly includes data storage and maintenance, as well as data access. Towards this end, we take a step further in cloud computing research and explore the possibilities of offering data services in federations of clouds. Our main interest lies in the management of big analytical data, taking advantage of the possibilities offered by a cloud federation and exploring the limits of data and workload execution and migration.

Efficient data management for scientific applications (2008-2010)

École Polytechnique Fédérale de Lausanne
Postdoctoral Researcher
Several scientific applications are constrained by the complexity of manipulating massive datasets. Observation-based sciences, such as astronomy, face immense data placement problems, whereas simulation-based sciences, such as earthquake modelling, must deal with complexity. Efficient data management by means of automated data placement and computational support can push the frontiers of scientists' ability to explore and understand massive scientific datasets.

Hyperion Project (2002-2010)

University of Toronto
Graduate and Post-Graduate Researcher
The Hyperion Project focuses on research into the principles and control of information sharing in a P2P database system, in which data are structured and queries are complex.
More information on the Hyperion Project can be found at the site: http://www.cs.toronto.edu/db/hyperion
06/04-07/04: Academic visit to the University of Toronto to work on the Hyperion Project.

Data management for location-based services of mobile nodes (2005-2008)

ΠΕΝΕΔ 2003 03ΕΔ291 Ministry of Development - General Secretariat of Research and Technology, Greece
Co-applicant and Leading Researcher as a PhD student
The project aims at the study and development of techniques for the efficient management of information that depends on the position of mobile nodes, such as humans or vehicles (location-based services). The techniques guarantee the management of enormous volumes of spatio-temporal and topic-based data collected and shared through networks, as well as the online servicing of multiple user requests.

Management of the semantic web: models and algorithms for the processing of semantic content (2004-2006)

Pythagoras: Reinforcement of research teams in universities - Greek Ministry of Education
PhD student
The semantic web suffers from a lack of structure, semantics, and meta-information, which are obstacles to efficient web information search. Current solutions use hierarchical schemas that tag the available data semantically. The goal of this project is to model and manage hierarchical schemas, offer query processing on such schemas, and guarantee their autonomy and distribution.



Contact Info

Vasiliki (Verena) Kantere

School of Electrical Engineering and Computer Science (EECS),
Office room SITE 5 060,
800 King Edward Ave
Ottawa ON Canada
K1N 6N5
email: vkantere@uottawa.ca
phone: +1 613 562 5700 ext. 6708