Personal page: Cristian Axenie

Bio

After earning a PhD in Neuroscience and Robotics from TUM in 2016, I spent one more year with the TUM Center of Competence Neuroengineering before joining the Huawei Research Center in Munich. Since 2017 I have been a Senior AI and ML Research Engineer at Huawei's largest research center outside China. At the same time, I lead the Audi Konfuzius-Institut Ingolstadt Lab, a Sino-German research initiative focused on combining modern AI and VR technology for applications ranging from sports technology to biomedical engineering and medical rehabilitation. Each term, I teach AI and ML to undergraduates at TH Ingolstadt.

A recent curriculum vitae:

(Extended Curriculum Vitae)

Selected Publications


Research

GLUECK: Growth pattern Learning for Unsupervised Extraction of Cancer Kinetics

Neoplastic processes are described by complex and heterogeneous dynamics. The interaction of neoplastic cells with their environment drives tumor growth and is critical for the initiation of cancer invasion. Despite the large spectrum of tumor growth models, there is no clear guidance on how to choose the most appropriate model for a particular cancer and how this choice will impact its subsequent use in therapy planning. Such models need a parametrization that depends on tumor biology and hardly generalizes to other tumor types and their variability. Moreover, the available datasets are small because measurement methods are limited or expensive. To alleviate the limitations that incomplete biological descriptions, the diversity of tumor types, and the small size of the data bring to mechanistic models, we introduce Growth pattern Learning for Unsupervised Extraction of Cancer Kinetics (GLUECK), a novel, data-driven model based on a neural network capable of unsupervised learning of cancer growth curves. Employing mechanisms of competition, cooperation, and correlation in neural networks, GLUECK learns the temporal evolution of the input data along with the underlying distribution of the input space. We demonstrate the superior accuracy of GLUECK, against four typically used tumor growth models, in extracting growth curves from a set of four clinical tumor datasets. Our experiments show that, without any modification, GLUECK can learn the underlying growth curves, remaining versatile both between and within tumor types. Preprint https://www.biorxiv.org/content/10.1101/2020.06.13.140715v1  Code https://gitlab.com/akii-microlab/ecml-2020-glueck-codebase
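
To make the mechanistic baselines concrete, here is a minimal sketch of fitting one classical growth law to sparse volume measurements; the four baseline models are not named in this summary, so the Gompertz form below is an assumption, and the data points are placeholders rather than clinical values.

```python
# Hypothetical sketch: fitting a classical mechanistic baseline (Gompertz) to
# sparse tumor volume measurements, the kind of model GLUECK is compared against.
# The data points below are illustrative placeholders, not a clinical dataset.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, v0, alpha, beta):
    """Gompertz growth: V(t) = v0 * exp(alpha / beta * (1 - exp(-beta * t)))."""
    return v0 * np.exp(alpha / beta * (1.0 - np.exp(-beta * t)))

# Placeholder measurements (days, volume in mm^3); real datasets are small and noisy.
t_obs = np.array([0, 5, 10, 15, 20, 30], dtype=float)
v_obs = np.array([50, 120, 260, 430, 600, 810], dtype=float)

params, _ = curve_fit(gompertz, t_obs, v_obs, p0=(v_obs[0], 0.1, 0.05), maxfev=10000)
v_pred = gompertz(t_obs, *params)
rmse = np.sqrt(np.mean((v_pred - v_obs) ** 2))
print(f"fitted (v0, alpha, beta) = {params}, RMSE = {rmse:.1f} mm^3")
```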

PRINCESS: Prediction of Individual Breast Cancer Evolution to Surgical Size

Modelling surgical size is not inherently meant to replicate the tumor's exact form and proportions, but rather to elucidate the extent of the tissue volume that may be surgically removed so as to improve patient survival and minimize the risk that a second or third operation will be needed to eliminate all malignant cells entirely. Given the broad range of tumor growth models, there is no specific rule of thumb for selecting the most suitable model for a particular breast cancer type, or for how that choice would influence its subsequent application in surgery planning. Typically, these models require tumor biology-dependent parametrization, which hardly generalizes to cope with tumor heterogeneity. In addition, the datasets are limited in size owing to the restricted or expensive methods of measurement. We address the shortcomings that incomplete biological specifications, the variety of tumor types and the limited size of the data bring to existing mechanistic tumor growth models and introduce a Machine Learning model for the PRediction of INdividual breast Cancer Evolution to Surgical Size (PRINCESS). This is a data-driven model based on neural networks capable of unsupervised learning of cancer growth curves. PRINCESS learns the temporal evolution of the tumor along with the underlying distribution of the measurement space. We demonstrate the superior accuracy of PRINCESS, against four typically used tumor growth models, in extracting tumor growth curves from a set of nine clinical breast cancer datasets. Our experiments show that PRINCESS can learn the underlying growth curves, remaining versatile across breast cancer types. Preprint https://www.biorxiv.org/content/10.1101/2020.06.13.150136v1  Code https://gitlab.com/akii-microlab/cbms2020

TUCANN: TUmor Characterization using Artificial Neural Networks

Despite the variety of imaging, genetic and histopathological data used to assess tumors, there is still an unmet need for patient-specific tumor growth profile extraction and tumor volume prediction for use in surgery planning. Models of tumor growth predict tumor size based on measurements made in histological images of individual patients’ tumors compared to diagnostic imaging. Typically, such models require tumor biology-dependent parametrization, which hardly generalizes to cope with tumor variability among patients. In addition, histopathology specimen datasets are limited in size, owing to restricted or single-time measurements. In this work, we address the shortcomings that incomplete biological specifications, the inter-patient variability of tumors, and the limited size of the data bring to mechanistic tumor growth models and introduce a machine learning model capable of characterizing a tumor, namely its growth pattern, phenotypical transitions, and volume. The model learns, without supervision and from different types of breast cancer data, the underlying mathematical relations describing tumor growth curves more accurately than three state-of-the-art models on three publicly available clinical breast cancer datasets, while remaining versatile among breast cancer types. Moreover, the model can also, without modification, learn the mathematical relations among, for instance, histopathological and morphological parameters of the tumor and, together with the growth curve, capture the (phenotypical) growth transitions of the tumor from a small amount of data. Finally, given the tumor growth curve and its transitions, our model can learn the relation among the tumor proliferation-to-apoptosis ratio, tumor radius, and tumor nutrient diffusion length to estimate tumor volume, which can be readily incorporated within current clinical practice for surgery planning. We demonstrate the broad capabilities of our model through a series of experiments on publicly available clinical datasets. Preprint https://www.biorxiv.org/content/10.1101/2020.06.08.140723v1   Code https://gitlab.com/akii-microlab/icann-2020-bio

CHIMERA: Combining Mechanistic Models and Machine Learning for Personalized Chemotherapy and Surgery Sequencing in Breast Cancer

Mathematical and computational oncology has increased the pace of cancer research towards the advancement of personalized therapy. Serving the pressing need to exploit the large amounts of currently underutilized data, such approaches bring a significant clinical advantage in tailoring the therapy. CHIMERA is a novel system that combines mechanistic modelling and machine learning for personalized chemotherapy and surgery sequencing in breast cancer. It optimizes decision-making in personalized breast cancer therapy by connecting tumor growth behaviour and chemotherapy effects through predictive modelling and learning. We demonstrate the capabilities of CHIMERA in simultaneously learning the tumor growth patterns, across several types of breast cancer, and the pharmacokinetics of a typical breast cancer chemotoxic drug. The learnt functions are subsequently used to predict how to sequence the intervention. We demonstrate the versatility of CHIMERA in learning from tumor growth and pharmacokinetics data to provide robust predictions under two typically used chemotherapy protocol hypotheses. Preprint https://www.biorxiv.org/content/10.1101/2020.06.08.140756v1  Code https://gitlab.com/akii-microlab/bibe2020
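
For illustration, here is a minimal sketch of the mechanistic ingredients such a pipeline couples: a one-compartment pharmacokinetic model of drug clearance driving a log-kill term on a Gompertz-type growth law. CHIMERA's actual model structure, parameters and dosing schedules are not given here, so everything below is an assumption made for exposition.

```python
# Hypothetical sketch of coupling pharmacokinetics with tumor growth: exponential
# clearance of a chemotoxic drug after bolus doses drives a log-kill term on a
# Gompertz-type growth law. Parameter values and the dosing schedule are
# illustrative assumptions, not CHIMERA's learnt functions.
import numpy as np

def simulate(days, doses, k_elim=0.3, alpha=0.4, beta=0.05, kill=0.02, v0=100.0):
    dt = 0.1
    t = np.arange(0.0, days, dt)
    c = np.zeros_like(t)          # drug concentration (arbitrary units)
    v = np.zeros_like(t)          # tumor volume (mm^3)
    v[0] = v0
    for i in range(1, len(t)):
        bolus = doses.get(round(t[i], 1), 0.0)            # dose given at this time?
        c[i] = c[i - 1] + dt * (-k_elim * c[i - 1]) + bolus
        growth = alpha * np.exp(-beta * t[i]) * v[i - 1]  # Gompertz-like slowdown
        v[i] = max(v[i - 1] + dt * (growth - kill * c[i - 1] * v[i - 1]), 0.0)
    return t, c, v

# Two hypothetical chemotherapy cycles at day 10 and day 20.
t, conc, vol = simulate(days=40, doses={10.0: 5.0, 20.0: 5.0})
print(f"volume at day 40: {vol[-1]:.1f} mm^3")
```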

VIRTOOAIR: VIrtual Reality TOOlbox for Avatar Intelligent Reconstruction

The project focuses on designing and developing a Deep Learning framework for improved avatar representations in immersive collaborative virtual environments. The proposed infrastructure will be built on a modular architecture tackling: a) a predictive avatar tracking module; b) an inverse kinematic learning module; c) an efficient data representation and compression module.

In order to perform precise predictive tracking of the body without using a camera motion capture system, we need proper calibration data for the 18 degrees of freedom provided by the VR devices, namely the HMD and the two hand controllers. Such a calibration procedure involves the mathematical modelling of a complex geometrical setup. As a first component of VIRTOOAIR, we propose a novel position calibration method using deep artificial neural networks, as depicted in the next figure.

The second component in the VIRTOOAIR toolbox is the inverse kinematics learner, generically described in the following diagram. Learning inverse kinematics for VR avatar interactions is useful when the kinematics of the head, body or controllers are not accurately available, when Cartesian information cannot be obtained from camera coordinates, or when the computational complexity of analytical solutions becomes too high.
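
As a rough illustration of what such a learner does (the actual VIRTOOAIR network is not described here), the sketch below trains a small PyTorch MLP to invert the forward kinematics of an assumed planar two-link arm, using the differentiable forward model itself as the training signal.

```python
# Minimal sketch of an inverse-kinematics learner in the spirit of this module
# (the actual network layout is an assumption). A small MLP maps a 2D
# end-effector position back to the joint angles of an assumed planar 2-link arm;
# the known, differentiable forward kinematics provides the training loss.
import torch
import torch.nn as nn

L1, L2 = 0.35, 0.30   # assumed link lengths (metres)

def forward_kinematics(q):
    x = L1 * torch.cos(q[:, 0]) + L2 * torch.cos(q[:, 0] + q[:, 1])
    y = L1 * torch.sin(q[:, 0]) + L2 * torch.sin(q[:, 0] + q[:, 1])
    return torch.stack([x, y], dim=1)

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    q = torch.rand(256, 2) * torch.tensor([3.14, 1.57])   # sample joint angles
    target = forward_kinematics(q)
    # distal loss: the predicted joints must reproduce the target end-effector pose
    loss = nn.functional.mse_loss(forward_kinematics(model(target)), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final end-effector reconstruction loss: {loss.item():.5f}")
```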

Data and bandwidth constraints are substantial in remote VR environments. However, such problems can be addressed through advances in compression techniques and network topologies. VIRTOOAIR proposes to tackle this problem through its third component, a neural network data representation (compression and reconstruction) module, described in the following diagram.
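
A minimal sketch of this idea, assuming an 18-value tracker frame (HMD plus two controllers, 6 DoF each) and an arbitrary latent size; it is not the VIRTOOAIR implementation.

```python
# Hypothetical sketch of the compression/reconstruction module: an autoencoder
# squeezes an 18-value tracker frame into a small latent code before
# transmission. Dimensions and training data are illustrative assumptions.
import torch
import torch.nn as nn

class PoseAutoencoder(nn.Module):
    def __init__(self, dof=18, latent=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dof, 32), nn.ReLU(), nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, dof))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = PoseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
poses = torch.randn(4096, 18)          # placeholder for recorded tracker frames

for epoch in range(200):
    recon = model(poses)
    loss = nn.functional.mse_loss(recon, poses)
    opt.zero_grad()
    loss.backward()
    opt.step()

code = model.encoder(poses[:1])        # 6 floats sent over the network per frame
print(code.shape, loss.item())
```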

PRELIMINARY RESULTS

VR MOTION RECONSTRUCTION BASED ON A VR TRACKING SYSTEM AND A SINGLE RGB CAMERA

Our preliminary results demonstrate the advantages of our system’s avatar pose reconstruction. This is mainly determined by the use of a powerful learning system, which offers significantly better results than existing heuristic solutions for inverse kinematics. Our system supports the paradigm shift towards learning systems capable of tracking full-body avatars inside Virtual Reality without the need for expensive external tracking hardware. The following figure shows preliminary results of our proposed reconstruction system.

For the upper body reconstruction, the semantically richer VR tracking system data is used. The lower body parts are reconstructed using a state-of-the-art deep learning mechanism that allows pose recovery in an end-to-end manner. The system can be split into five parts: tracking data acquisition, inverse kinematic regression, image processing, end-to-end recovery and visualization.

Publications

Peer reviewed papers

A. Becher, C. Axenie, T. Grauschopf, VIRTOOAIR: VIrtual Reality TOOlbox for Avatar Intelligent Reconstruction,  2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR2018).

Online Distributed Machine Learning on Streams

Streams are sequences of events (i.e. tuples containing various types of data) that are generated by various sources (e.g. sensors, machines or humans) in a chronologically ordered fashion. The stream processing paradigm involves applying business functions over the events in the stream. A typical approach to stream processing assumes accumulating such events within certain boundaries at a given time and applying functions on the resulting collection. Such transient event collections are termed windows.
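
A plain-Python sketch of the windowing idea (no particular stream engine is implied): events arrive one by one and an aggregate is emitted over each transient window.

```python
# Minimal sketch of windowing: events arrive one by one and a business function
# (here, an average over a numeric field) is applied to each transient window.
from collections import deque

def sliding_window_avg(stream, size=5, slide=1):
    """Yield the mean of the last `size` events every `slide` arrivals."""
    window, seen = deque(maxlen=size), 0
    for event in stream:
        window.append(event)
        seen += 1
        if len(window) == size and seen % slide == 0:
            yield sum(window) / size

readings = [3, 5, 4, 8, 7, 9, 6, 5, 4, 10]        # e.g. sensor measurements
print(list(sliding_window_avg(readings, size=5, slide=1)))
```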

The stream processing paradigm simplifies parallel software and hardware by restricting the parallel computation that can be performed. Given a sequence of data (a stream), a series of operations (functions) is applied to each element in the stream in a declarative way: we specify what we want to achieve, not how.

Big Data Stream Online Learning is more challenging than batch or offline learning, since the data may not preserve the same distribution over the lifetime of the stream. Moreover, each example coming in a stream can only be processed once, or needs to be summarized with a small memory footprint, and the learning algorithms must be efficient.
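
A small example of the single-pass constraint: Welford's algorithm summarizes a stream's mean and variance in constant memory, touching each example exactly once.

```python
# Sketch of the "process each example once, with a small memory footprint"
# constraint: Welford's single-pass algorithm keeps a running mean and variance
# in O(1) memory, without ever storing the stream.
class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)
print(stats.mean, stats.variance)   # 5.0 and ~4.57 for this toy stream
```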

The need for Online Machine Learning

How do we compute the entropy of an unbounded collection of data, where the domain of the variables can be huge and the number of classes of objects is not known a priori? How do we maintain the k most frequent items in a retail data warehouse with 3 TB of data, hundreds of GB of new sales records arriving daily, and millions of different items? What becomes of statistical computations when the learner can only afford one pass through each data sample because of time and memory constraints, and has to decide on the fly what is relevant and should be processed, and what is redundant and can be discarded?
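
One classic answer to the k-most-frequent-items question is the Space-Saving algorithm, sketched below; it is included to make the constraint concrete and is not necessarily the technique used in this project.

```python
# Sketch of the Space-Saving algorithm: keep only k counters over an unbounded
# stream and evict the smallest one when an unseen item arrives. Memory stays
# O(k) no matter how long the stream runs.
def space_saving(stream, k):
    counters = {}                       # item -> estimated count
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            victim = min(counters, key=counters.get)
            counters[item] = counters.pop(victim) + 1   # inherit the evicted count
    return sorted(counters.items(), key=lambda kv: -kv[1])

sales = ["milk", "bread", "milk", "beer", "milk", "bread", "eggs", "milk", "beer"]
print(space_saving(sales, k=3))         # approximate top-3 items and counts
```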

This project focuses on developing new Online Machine Learning algorithms that run distributed on clusters using a stream processor. The characteristics of streaming data call for a new vision because: data are made available through unlimited streams that continuously flow, possibly at high speed, over time; the underlying regularities may evolve over time rather than being stationary; the data can no longer be considered independent and identically distributed; and the data are now often situated in space as well as in time.

Publications

Peer reviewed papers

C. Axenie et al., STARLORD: Sliding window Temporal Accumulate-Retract Learning for Online Reasoning on Datastreams, 2018 IEEE International Conference on Machine Learning and Applications (ICMLA2018). (submitted)

D. Foroni, C. Axenie et al., Moira: A Goal-Oriented Incremental Machine Learning Approach to Dynamic Resource Cost Estimation in Distributed Stream Processing Systems, International Workshop on Real-Time BI and Analytics, VLDB 2018.

Deep Learning for Autonomous Systems

Scope

The current research project aims at exploring object detection algorithms using a novel neuromorphic vision sensor with deep learning neural networks for autonomous electric cars. More precisely, this work will be conducted with the Schanzer Racing Electric (SRE) team at Technische Hochschule Ingolstadt. SRE is a team of around 80 students that designs, develops and manufactures an all-electric racing car every year to compete in Formula Student Electric. The use of neuromorphic vision sensors together with deep neural networks for object detection is the innovation that the project proposes. The project was supported by NVIDIA through a GPU Research Grant.

Context

Autonomous driving is a highly discussed topic, but in order to operate autonomously, the car needs to sense its environment. Vision provides the most informative and structured modality capable of grounding perception in autonomous vehicles. In recent decades, classical computer vision algorithms were used not only to locate relevant objects in the scene, but also to classify them. In recent years, however, major improvements were achieved when the first deep learning object detectors were developed. In general, such object detectors use a convolutional feature extractor as their basis. Due to the multitude of feature extraction algorithms, there are numerous combinations of feature extractor and object detector, which influences a system designer’s approach. One of the most interesting niches is the analysis of traffic scenarios. Such scenarios require fast computation of features and classification for decision making.

Our approach to object detection, recognition and decision making aims at “going away from frames”. Instead of using traditional RGB cameras, we aim at utilizing dynamic vision sensors (DVS - https://inivation.com/dvs/). Dynamic vision sensors mimic basic characteristics of human visual processing (i.e. neuromorphic vision) and have created a new paradigm in vision research. Similar to photoreceptors in the human retina, a single DVS pixel (receptor) generates events in response to a change in detected illumination. Events encode dynamic features of the scene, e.g. moving objects, as a spatio-temporal set of events. Since DVS sensors drastically reduce redundant pixels (e.g. static background features) and encode objects in a frame-less fashion with high temporal resolution (about 1 μs), they are well suited for fast motion analysis and tracking. DVS sensors are capable of operating in uncontrolled environments with varying lighting conditions because of their high dynamic range of operation (120 dB).
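
Before a conventional CNN can consume DVS output, the asynchronous events are usually turned into a tensor. Below is a hedged sketch of one common conversion, accumulating events from a short time slice into a two-channel count image; the sensor resolution and slice length are assumptions.

```python
# Hypothetical sketch of a common event-to-tensor conversion: accumulate DVS
# events (x, y, timestamp, polarity) from a short time slice into a two-channel
# count image (one channel per polarity) that a conventional CNN can consume.
import numpy as np

def events_to_frame(events, width=128, height=128, slice_us=10_000, t_start=0):
    """events: iterable of (x, y, t_us, polarity in {0, 1})."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, t_us, pol in events:
        if t_start <= t_us < t_start + slice_us:
            frame[pol, y, x] += 1.0
    return frame

# A few fake events standing in for a real eDVS stream.
demo_events = [(10, 12, 500, 1), (10, 13, 900, 1), (64, 64, 4000, 0)]
print(events_to_frame(demo_events).sum())   # 3 events landed in the slice
```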

As traffic situations demand fast detection and precise estimation, we plan to use such an event-based visual representation together with two convolutional networks that have proved suitable for the task. The two algorithms we plan to explore are the Single Shot Multibox Detector (SSMD), which became popular for its fast computation, and the Faster Region-Based Convolutional Neural Network (Faster RCNN), which is known to be a slow but performant detector.
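
To get a rough feel for the speed/accuracy trade-off, the sketch below times a forward pass of two torchvision detectors used as stand-ins; torchvision does not ship the exact Inception V2 SSMD or ResNet-101 Faster RCNN variants named here, so the models and timing setup are assumptions.

```python
# Hedged sketch of the comparison using torchvision stand-ins:
# ssdlite320_mobilenet_v3_large for the fast single-shot family and
# fasterrcnn_resnet50_fpn for the slower two-stage family.
import time
import torch
from torchvision.models import detection

image = [torch.rand(3, 480, 640)]            # placeholder RGB frame in [0, 1]

for name, builder in [("SSD-lite / MobileNetV3", detection.ssdlite320_mobilenet_v3_large),
                      ("Faster R-CNN / ResNet-50 FPN", detection.fasterrcnn_resnet50_fpn)]:
    # downloads pretrained weights on first run; older torchvision uses pretrained=True
    model = builder(weights="DEFAULT").eval()
    with torch.no_grad():
        start = time.time()
        outputs = model(image)
    print(f"{name}: {len(outputs[0]['boxes'])} boxes in {time.time() - start:.2f}s (CPU)")
```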

Motivation

The project aims to lay fundamental exploratory groundwork, both in terms of sensory data for environment perception and in terms of neural network architectures for the considered task. The experiments also aim at identifying the points where better accuracy can only be obtained by sacrificing computation time. The two architectures we chose sit at opposite ends of this trade-off. The first one is the SSMD network with Inception V2 as a feature extractor. This network has a low computation time with acceptable accuracy. Its counterpart is the Faster RCNN with ResNet-101 as its feature extractor. Its accuracy is one of the highest, whereas its computation time is relatively high. Whereas features are common in frame-based computer vision problems, no solution yet exists to determine unique features in event streams. This is the first step towards more complex algorithms operating on the sparse event stream. The possibility to create unique filter responses gives rise to the notion of temporal features. This opens the exploratory work we envision in this project: to investigate the use of SSMD and Faster RCNN networks with event-based input in a natively parallel processing pipeline.

Preliminary results

The initial step consisted of training a single-shot detector (MobileNet) for cone detection, a stage in the preparation for the Formula Student Electric competition. Experiments were carried out on an NVIDIA GTX 1080 Ti GPU using TensorRT. The performance evaluation is shown in the following diagrams.

Adaptive Neuromorphic Sensorimotor Control

Efficient sensorimotor processing is inherently driven by physical real-world constraints that an acting agent faces in its environment. Sensory streams contain certain statistical dependencies determined by the structure of the world, which impose constraints on a system’s sensorimotor affordances.

This limits the number of possible sensory information patterns and plausible motor actions. Learning mechanisms allow the system to extract the underlying correlations in sensorimotor streams.

This research direction focused on the exploration of sensorimotor learning paradigms for embedding adaptive behaviors in robotic systems and on demonstrating flexible control systems using neuromorphic hardware and neural-based adaptive control. I employed large-scale neural networks for gathering and processing complex sensory information, learning sensorimotor contingencies, and providing adaptive responses.

To investigate the properties of such systems, I developed flexible embodied robot platforms and integrated them within a rich tool suite for specifying neural algorithms that can be implemented in embedded neuromorphic hardware.

The mobile manipulator I developed at NST for adaptive sensorimotor systems consists of an omni-directional (holonomic) mobile manipulation platform with embedded low-level motor control and multimodal sensors.

The on-board micro-controller receives desired commands via WiFi and continuously adapts the platform's velocity controller. The robot’s integrated sensors include wheel encoders for estimating odometry, a 9DoF inertial measurement unit, a proximity bump-sensor ring and three event-based embedded dynamic vision sensors (eDVS) for visual input.
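
A hedged sketch of what such a low-level layer might look like, assuming a three-wheel omnidirectional base and a deliberately crude gain-adaptation rule; the real platform's wheel layout and its neuromorphic controller are not reproduced here.

```python
# Hedged sketch of a low-level velocity layer: the body-velocity command
# (vx, vy, omega) is mapped to wheel speeds for an assumed three-wheel
# omnidirectional base, and a simple adaptive per-wheel gain is trimmed from the
# encoder error. This is illustrative, not the robot's actual neural controller.
import numpy as np

WHEEL_ANGLES = np.deg2rad([0.0, 120.0, 240.0])   # assumed wheel placement
BASE_RADIUS = 0.15                               # metres, assumed

def wheel_speeds(vx, vy, omega):
    """Inverse kinematics of a 3-wheel holonomic base."""
    return np.array([-np.sin(a) * vx + np.cos(a) * vy + BASE_RADIUS * omega
                     for a in WHEEL_ANGLES])

gains = np.ones(3)                               # per-wheel adaptive gains

def control_step(cmd, measured, eta=0.05):
    """One adaptation step: scale commands by gains, nudge gains by the error."""
    global gains
    target = wheel_speeds(*cmd)
    error = target - measured                    # encoder feedback vs. target
    gains += eta * error * np.sign(target + 1e-9)  # crude gradient-like update
    return gains * target

print(control_step(cmd=(0.2, 0.0, 0.1), measured=np.array([0.05, -0.02, 0.03])))
```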

The mobile platform carries an optional 6-axis robotic arm with a reach of more than 40 cm. This robotic arm is composed of a set of links connected by revolute joints and allows lifting objects of up to 800 grams. The mobile platform contains an on-board battery of 360 Wh, which allows autonomous operation for well over 5 hours.

Publications

Peer reviewed journal papers

F. Mirus, C. Axenie, T. C. Stewart, J. Conradt, Neuromorphic Sensorimotor Adaptation for Robotic Mobile Manipulation: From Sensing to Behaviour, Cognitive Systems Research, 2018.

I. Sugiarto, C. Axenie, J. Conradt, FPGA-based Hardware Accelerator for an Embedded Factor Graph with Configurable Optimization, Journal of Circuits, Systems and Computers, 2018.

Synthesis of Distributed Cognitive Systems - Learning and Development of Multisensory Integration

My research interest is in developing sensor fusion mechanisms for robotic applications. In order to extend the interacting-areas framework, a second direction of my research focuses on learning and development mechanisms.

Human perception improves through exposure to the environment. A wealth of sensory streams, which provide a rich experience, continuously refines the internal representations of the environment and of the body's own state. In turn, these representations enable more precise motor planning.

An essential component in motor planning and navigation, in both real and artificial systems, is egomotion estimation. Given the multimodal nature of the sensory cues, learning crossmodal correlations improves the precision and flexibility of motion estimates.

During development, the biological nervous system must constantly combine various sources of information and moreover track and anticipate changes in one or more of the cues. Furthermore, the adaptive development of the functional organisation of the cortical areas seems to depend strongly on the available sensory inputs, which gradually sharpen their response, given the constraints imposed by the cross-sensory relations.

Learning processes which take place during the development of a biological nervous system enable it to extract mappings between external stimuli and its internal state. Precise egomotion estimation is essential to keep these external and internal cues coherent given the rich multisensory environment. In this work we present a learning model which, given various sensory inputs, converges to a state providing a coherent representation of the sensory space and the cross-sensory relations.

Figure 1

The model is based on Self-Organizing Maps (SOM) and Hebbian learning (see Figure 1), using sparse population-coded representations of sensory data. The SOM is used to represent the sensory data, while the Hebbian linkage extracts the coactivation pattern across the input modalities eliciting peaks of activity in the neural populations. The model was able to learn the intrinsic sensory data statistics without any prior knowledge (see Figure 2).

Figure 2
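
Below is a minimal numpy sketch of these two ingredients, one 1D SOM per modality plus a Hebbian matrix over their co-activations, trained on a toy cross-sensory relation; the sizes, learning rates and the cubic relation are illustrative assumptions rather than the published model.

```python
# Minimal numpy sketch: one 1D SOM per sensory modality and a Hebbian matrix
# linking units co-activated by correlated inputs. The toy relation
# (modality2 = modality1^3) and all sizes/rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                           # units per SOM
som1 = np.linspace(-1, 1, N) + 0.01 * rng.standard_normal(N)
som2 = np.linspace(-1, 1, N) + 0.01 * rng.standard_normal(N)
hebb = np.zeros((N, N))                          # cross-modal linkage

def som_update(som, x, lr=0.1, sigma=3.0):
    winner = np.argmin(np.abs(som - x))
    neighbourhood = np.exp(-0.5 * ((np.arange(N) - winner) / sigma) ** 2)
    som += lr * neighbourhood * (x - som)
    return neighbourhood                         # population activity around winner

for _ in range(5000):
    s1 = rng.uniform(-1, 1)
    s2 = s1 ** 3                                 # hidden cross-sensory relation
    a1 = som_update(som1, s1)
    a2 = som_update(som2, s2)
    hebb += 0.01 * np.outer(a1, a2)              # strengthen co-activated pairs
    hebb *= 0.999                                # slow decay keeps weights bounded

# After learning, the Hebbian matrix maps activity in SOM1 to an estimate in SOM2.
query = np.exp(-0.5 * ((som1 - 0.5) / 0.05) ** 2)    # population code for s1 = 0.5
estimate = som2[np.argmax(query @ hebb)]
print(f"decoded s2 for s1=0.5: {estimate:.2f} (true value {0.5**3:.3f})")
```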

The developed model, implemented for 3D egomotion estimation on a quadrotor, provides precise estimates for roll, pitch and yaw angles (setup depicted in Figure 3a, b).

Figure 3

Given the relatively complex and multimodal scenarios in which robotic systems operate, with noisy and partially observable environment features, the capability to extract precise and timely estimates of egomotion critically influences the set of possible actions.

Utilising simple and computationally effective mechanisms, the proposed model is able to learn the intrinsic correlational structure of sensory data and provide more precise estimates of egomotion (see Figure 4a, b).

Figure 4

Moreover, by learning the sensory data statistics and distribution, the model is able to judiciously allocate resources for efficient representation and computation without any prior assumptions and simplifications. Alleviating the need for tedious design and parametrisation, it provides a flexible and robust approach to multisensory fusion, making it a promising candidate for robotic applications.

Publications

Peer reviewed journal papers

C. Axenie, C. Richter, J. Conradt, A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics, Sensors Journal, 2016 (PDF)

Peer reviewed conference papers

C. Axenie, J. Conradt, Learning Sensory Correlations for 3D Egomotion Estimation, International Conf. on Biomimetics and Biohybrid Systems, 2015

Poster presentations

C. Axenie, J. Conradt, A model for development and emergence in multisensory integration, Bernstein Conference on Computational Neuroscience, Göttingen, 2014. (PDF)


Synthesis of Distributed Cognitive Systems - Interacting Cortical Maps for Environmental Interpretation

The core focus of my research interest is in developing sensor fusion mechanisms for robotic applications. These mechanisms enable a robot to obtain a consistent and global percept of its environment using available sensors by learning correlations between them in a distributed processing scheme inspired by cortical mechanisms.

Environmental interaction is a significant aspect in the life of every physical entity, allowing it to update its internal state and acquire new behaviors. Such interaction is performed by repeated iterations of a perception-cognition-action cycle, in which the entity acquires and memorizes relevant information from the noisy and partially observable environment in order to develop a set of applicable behaviors (see Figure 5).

Figure 5

This recently started research project is in the area of mobile robotics, and more specifically in explicit methods for acquiring and maintaining such environmental representations. State-of-the-art implementations build upon probabilistic reasoning algorithms, which typically aim at optimal solutions at the cost of high processing requirements.

In this project I am developing an alternative, neurobiologically inspired method for real-time interpretation of sensory stimuli in mobile robotic systems: a distributed networked system with inter-merged information storage and processing that allows efficient parallel reasoning. This networked architecture will comprise interconnected heterogeneous software units, each encoding a different feature of the state of the environment in a local representation (see Figure 6).

Figure 6

Such extracted pieces of environmental knowledge interact by mutual influence to ensure overall system coherence. A sample instantiation of the developed system focuses on mobile robot heading estimation (see Figure 7). In order to obtain a robust and unambiguous description of the robot’s current orientation within its environment, inertial, proprioceptive and visual cues are fused. Given the available sensory data, the network relaxes to a globally consistent estimate of the robot's heading angle and position.

Figure 7
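
A toy sketch of the relaxation idea follows, assuming three heading cues with fixed confidence weights; the actual network's dynamics and connectivity are richer than this.

```python
# Hedged sketch of relaxation toward a consistent heading: each cue (gyro
# integration, wheel odometry, vision) holds a local estimate with a confidence
# weight, and repeated local updates pull the coupled estimates toward a common
# value while handling angle wrap-around.
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

cues = {"inertial": (np.deg2rad(42.0), 0.5),     # (measured heading, confidence)
        "odometry": (np.deg2rad(47.0), 0.3),
        "vision":   (np.deg2rad(40.0), 0.2)}

estimates = {name: angle for name, (angle, _) in cues.items()}

for _ in range(100):                              # relaxation iterations
    consensus = np.arctan2(
        sum(w * np.sin(est) for (_, w), est in zip(cues.values(), estimates.values())),
        sum(w * np.cos(est) for (_, w), est in zip(cues.values(), estimates.values())))
    for name, (measured, w) in cues.items():
        # each node is pulled toward its own measurement and toward the consensus
        estimates[name] = wrap(estimates[name]
                               + 0.1 * wrap(measured - estimates[name])
                               + 0.1 * wrap(consensus - estimates[name]))

print(f"fused heading: {np.rad2deg(consensus):.1f} deg")
```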

Publications

Journal articles

I. Sugiarto, C. Axenie, J. Conradt, From Adaptive Reasoning to Cognitive Factory: Bringing Cognitive Intelligence to Manufacturing Technology, International Journal of Industrial Research and Applied Engineering (2016) (PDF)

I. Susnea, C. Axenie, Cognitive Maps for Indirect Coordination of Intelligent Agents, Studies in Informatics and Control Vol. 24 (2015) (PDF)

C. Axenie, J. Conradt, Cortically inspired sensor fusion network for mobile robot egomotion estimation, Robotics and Autonomous Systems (2014) (PDF)

Peer reviewed conference papers

C. Axenie, J. Conradt, Cortically Inspired Sensor Fusion Network for Mobile Robot Heading Estimation, International Conf. on Artificial Neural Networks (ICANN), 2013, pp. 240-47. (PDF)

Poster presentations

C. Axenie, M. Firouzi, M.Mulas, J. Conradt, Multimodal sensor fusion network for mobile robot egomotion estimation, Bernstein Conference on Computational Neuroscience, Göttingen, 2014. (PDF)

M. Firouzi, C. Axenie, J. Conradt, Multi-sensory cue integration with reliability encoding, using Line Attractor Dynamics, searching for optimality, Bernstein Conference on Computational Neuroscience, Göttingen, 2014. (PDF)

C. Axenie, M. Firouzi, J. Conradt, Multisensory Integration Network for Mobile Robot Self-motion Estimation, Bernstein Conference on Computational Neuroscience, Tübingen, 2013. (PDF)

C. Axenie, J. Conradt, Synthesis of Distributed Cognitive Systems: Interacting Maps for Sensor Fusion, Bernstein Conference on Computational Neuroscience, München, 2012. (PDF)


Adaptive Nonlinear Control Algorithm for Fault Tolerant Robot Navigation

Today’s trends in control engineering and robotics are gradually converging on a challenging area: the development of fault-tolerant real-time applications. Such applications should deliver synchronized data sets on time, minimize response latency, and meet their performance specifications in the presence of disturbances. Fault-tolerant behavior in mobile robots refers to the ability to autonomously detect and identify faults, as well as the capability to continue operating after a fault has occurred. This work introduces a real-time distributed control application with fault tolerance capabilities for differential wheeled mobile robots (see Figure 8).

Figure 8

Furthermore, the application was extended with a novel environment-mapping implementation for mobile robots with limited sensing. The developed algorithm is a SLAM implementation: it uses real-time data acquired from the sonar ring and feeds this information to the mapping module for offline map building (see Figure 9).

Figure 9
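
A hedged sketch of the mapping module's core idea, a log-odds occupancy grid updated with a simple inverse sensor model along each sonar beam; the cell size, sensor model and the (here assumed known) robot pose are illustrative, and the actual SLAM implementation also estimates the pose.

```python
# Hedged sketch of occupancy-grid mapping from a sonar ring: cells along each
# beam are marked free, the cell at the measured range is marked occupied.
# Cell size, increments and the known pose are illustrative assumptions.
import numpy as np

GRID, RES = np.zeros((100, 100)), 0.05          # 5 m x 5 m map, 5 cm cells
L_OCC, L_FREE = 0.85, -0.4                      # log-odds increments

def update_beam(pose, bearing, rng_m, max_rng=3.0):
    x, y, theta = pose
    steps = int(min(rng_m, max_rng) / RES)
    for i in range(steps + 1):
        cx = int((x + i * RES * np.cos(theta + bearing)) / RES)
        cy = int((y + i * RES * np.sin(theta + bearing)) / RES)
        if not (0 <= cx < GRID.shape[0] and 0 <= cy < GRID.shape[1]):
            return
        GRID[cx, cy] += L_OCC if (i == steps and rng_m < max_rng) else L_FREE

# One sonar ring scan (bearing in radians, range in metres) from a known pose.
pose = (2.5, 2.5, 0.0)
for bearing, rng_m in [(-0.4, 1.2), (0.0, 0.9), (0.4, 2.1)]:
    update_beam(pose, bearing, rng_m)
print("occupied cells:", int((GRID > 0).sum()))
```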

The latter runs on top of the real-time fault-tolerant control application for mobile robot trajectory tracking (see Figures 10, 11).

Figure 10

Figure 11

Alongside the developed application, the mechanical design, the electronics design and the implementation were carried out as part of BA and MA thesis projects.

Publications

Peer reviewed conference papers

Axenie Cristian, Stancu Alexandru, Zanoschi Aurelian, Pascalin Andrei, Perjeru Marius, Maftei Florentina, A Client-Server Based Real-Time Control Tool for Complex Distributed Systems, Proc. of 9th Real-Time Linux Workshop, Linz, Austria, November 2007 (PDF)

Cristian Axenie, “Mobile Robot Fault Tolerant Control. Introducing ARTEMIC.” In the 9th International Conference on Signal Processing, Robotics and Automation WSEAS Conference Proceedings Included in ISI/SCI Web of Science and Web of Knowledge, University of Cambridge, UK, February 2010 (PDF)

Cristian Axenie, “A New Approach in Mobile Robot Fault Tolerant Control. Minimizing Costs and Extending Functionality”, Included in ISI / SCI (Web of Science) WSEAS TRANSACTIONS ON SYSTEMS AND CONTROL 2010 (PDF)

Cristian Axenie, Cernega Daniela, “Mobile Robot Fault Tolerant Control”, IEEE/IACSIT ICIEE 2010 (International Conference on Information and Electronics Engineering), Shanghai, China, June 2010 (PDF)

Cristian Axenie, Cernega Daniela, “ Adaptive Sliding Mode Controller Design for Mobile Robot Fault Tolerant Control”, 19th IEEE International Workshop on Robotics in Alpe-Adria-Danube Region, Budapest, Hungary, June 2010 (PDF)

Cristian Axenie, Razvan Solea, “Real Time Control Design for Mobile Robot Fault Tolerant Control”, 2010 IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, July 15-17, 2010, Qingdao, ShanDong, China (PDF)


Competitions / Hackathons

Merck Research "Future of AI Challenge" (August 2019)

IRENA (Invariant Representations Extraction in Neural Architectures), the Team NeuroTHIx codebase. 1st place at the Merck Future of AI Research Challenge. https://app.ekipa.de/challenge/future-of-ai/about THI coverage: https://www.thi.de/suche/news/news/thi-erfolgreich-in-ai-forschungswettbewerb

The Merck Research Challenge aimed to generate insights from various disciplines that can lead to progress towards an understanding of invariant representations, using approaches that are novel and not based on Deep Learning.

IRENA (Invariant Representations Extraction in Neural Architectures) is the approach that team NeuroTHIx developed. IRENA offers a computational layer for extracting sensory relations in rich visual scenes, with learning, inference, de-noising and sensor fusion capabilities. Through its underlying unsupervised learning capabilities, the system is also able to embed semantics and perform scene understanding.

Using cortical maps as neural substrate for distributed representations of sensory streams, our system is able to learn its connectivity (i.e., structure) from the long-term evolution of sensory observations. This process mimics a typical development process where self-construction (connectivity learning), self-organization, and correlation extraction ensure a refined and stable representation and processing substrate. Following these principles, we propose a model based on Self-Organizing Maps (SOM) and Hebbian Learning (HL) as main ingredients for extracting underlying correlations in sensory data, the basis for subsequently extracting invariant representations.    

University of Cambridge Hackathon - Hack Cambridge (January 2017)

  Microsoft Faculty Connection coverage (http://goo.gl/uPWGna) for the project demo at Hack Cambridge 2017, 28 – 29 January 2017, University of Cambridge, with a Real-time Event-based Vision Monitoring and Notification System for Seniors and Elderly using Neural Networks. It has been estimated that 33% of people aged 65 will fall; at around 80, that increases to 50%. In case of a fall, seniors who receive help within an hour have a better rate of survival, and the faster help arrives, the less likely an injury will lead to hospitalization or the need to move into a long-term care facility. In such cases, fast visual detection of abnormal motion patterns is crucial. In this project we propose the use of a novel embedded Dynamic Vision Sensor (eDVS) for the task of classifying falls. Unlike standard cameras, which provide a time-sequenced stream of frames, the eDVS provides only relative changes in a scene, given by individual events at the pixel level. This different encoding scheme gives the eDVS several advantages over standard cameras. First, there is no redundancy in the data received from the sensor; only changes are reported. Second, as only events are considered, the eDVS data rate is high. Third, the power consumption of the overall system is small, as just a low-end microcontroller is used to fetch events from the sensor, so it can ultimately run for long periods in a battery-powered setup. This project investigates how we can exploit the eDVS's fast response time and low redundancy in making decisions about elderly motion. The computation back-end will be realized with a neural network classifier to detect falls and filter outliers. The data will be provided by 2 stimuli (blinking LEDs at different frequencies) and will represent the actual position of the person wearing them. The changes in position of the stimuli will encode the possible positions corresponding to falls or normal cases. We will use Microsoft Azure ML Studio to implement an MLP binary classifier for the 4-dimensional input (2 stimuli x 2 Cartesian coordinates, (x, y) in the field of view). We labelled the data as Fall (F) and No Fall (NF).
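
A stand-in sketch of the classifier described above, using scikit-learn's MLPClassifier in place of Azure ML Studio and synthetic placeholder data in which a fall corresponds to a large vertical drop of both LED stimuli:

```python
# Stand-in sketch for the fall classifier: an MLP on the 4-D input
# (two LED stimuli x (x, y) position in the eDVS field of view) with binary
# Fall / No Fall labels. scikit-learn replaces Azure ML Studio here, and the
# training data is a synthetic placeholder, not recorded sensor data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
upright = rng.normal([40, 90, 60, 95], 5, size=(n // 2, 4))   # LEDs high in the image
fallen = rng.normal([40, 20, 60, 15], 5, size=(n // 2, 4))    # LEDs near the ground
X = np.vstack([upright, fallen])
y = np.array([0] * (n // 2) + [1] * (n // 2))                 # 0 = NF, 1 = F

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy on the toy data: {clf.score(X_te, y_te):.2f}")
```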

ANDRITZ Pioneers Hackathon (January 2017)

  Best innovation idea at the ANDRITZ Pioneers Hackathon, innovating for the international technology group ANDRITZ: developed an artificial neural learning agent for enhancing productivity in automation processes.

Wellcome Trust Hack The Senses Hackathon (June 2016)

  WIRED UK coverage ( http://goo.gl/5yQ1Fn ) at the Hack the Senses hackathon in London: How to hack your senses: from 'seeing' sound to 'hair GPS': "Two-man team HearSee built a headband that taps into synaesthesia, translating changes in frame-less video to sound allowing blind people or those with a weak vision to see motion. Roboticist and neurologist Cristian Axenie assembled the hardware in mere minutes – attaching a pair of cameras and wires to a terrycloth headband."  

Daimler FinTech Hackathon (April 2016)

  Awarded 1st prize (team) at the Daimler Financial Services Big Data Analytics Hackathon for the design of a neuro-fuzzy learning system for anomaly detection and user interaction in big data streams.

Burda Hackdays (April 2016)

  Awarded special Microsoft Cognitive Technologies prize at the Burda Hackdays for the design of a neural learning system for inferring role assignments in working teams using psychometric data analytics.

Automotive Hackdays (March 2016)

  Awarded 1st prize (team) in the BMW Automotive Hackdays for the design of an inference system for driving profile learning and recommendation for skills improvement and predictive maintenance in car-sharing.