Welcome to World Data Congress

Write to us at info@worlddatacongress.com

Agenda

Name: Jackie Carter

University of Manchester, United Kingdom

Biography: Jackie Carter is a professor of statistical literacy at The University of Manchester. In 2020 she received a Women in Data industry award and a National Teaching Fellowship. Jackie works to connect education and skills to workplace needs. Her 2021 book “Work Placements, Internships and Applied Social Research” covers the theory and practice of learning by doing. She is an elected member of the International Statistical Institute, a Fellow of the Academy of Social Sciences, and a member of both the Economic and Social Research Council’s (ESRC) Strategic Advisory Council and ESRC’s CLOSER Expert Group.

Title of the talk: Developing a future pipeline of applied social researchers through experiential learning: The case of a data fellows programme

Abstract: This paper presents an innovative model for developing data and statistical literacy in the undergraduate population through an experiential learning model developed in the UK. The national Q-Step (Quantitative Step change) programme (2013–2021) aimed to (i) create a step change in teaching undergraduate social science students quantitative research skills, and (ii) develop a talent pipeline for future careers in applied social research. We focus on a model developed at the University of Manchester, which has created paid work placement projects in industry for students to practise their data and statistical skills in the workplace. We call these students data fellows. Our findings have informed the development of the undergraduate curriculum and enabled reflection on the skills and software that we teach. Data fellows are graduating into careers in fields that would previously have been difficult to enter without a STEM (Science, Technology, Engineering and Mathematics) degree. 70% of data fellows to date are female, with 25% from disadvantaged backgrounds or under-represented groups; hence the programme also addresses equality and diversity. The paper documents some of the successes and challenges of the programme and shares insight into non-STEM pipelines into social research careers that require data and statistical literacy. A major advantage of our approach is the development of hybrid data analysts, who are able to bring social science subject expertise to their research as well as data and statistical skills. Focusing on the value of experiential learning to develop quantitative research skills in professional environments, we provoke a discussion about how this activity could not only be sustained but also scaled up.

Name: Moshood

Data Scientist and Machine Learning Engineer, United Kingdom

Biography: Moshood holds two MSc degrees, the most recent in Applied Artificial Intelligence and Data Analytics from the University of Bradford. He is a passionate, reliable data scientist and machine learning engineer with multiple years of experience, dedicated to creative problem-solving approaches to tackling challenges, with broad experience in consulting, investment, finance, energy and retail. His expertise spans machine learning and data science modelling, market analysis, forecasting, informed decision-making, business intelligence, business process optimisation, revenue assurance, performance analysis and customer needs assessments. He is an avid community builder with research interests in responsible AI and data governance, as well as the application of AI in the energy, finance and healthcare sectors.

Title of the talk: Critical Evaluation of the Future Role of AI in Business and Society

Abstract: In contemporary economies, artificial intelligence (AI) and machine learning (ML) algorithms are frequently utilised to generate judgments that have far-reaching consequences for employment, education, access to finance, and a variety of other fields. The increasing pace of advancement in AI has substantially affected the functioning of societies and economies, prompting extensive debate over the merits and demerits of AI for society and humanity at large. This research critically explores the benefits and drawbacks of artificial intelligence, in view of its impact on people, businesses, economies and society, from an ethical, legal and governance point of view. While it is imperative that public welfare is rigorously promoted and guarded, it is equally necessary to consider the interests and success of AI developers and their organisations. Therefore, it is essential to maintain an optimum balance between ethical principles. Our findings show that experts are proposing an era of AI ethics that focuses on utilitarianism, which balances risks against benefits, and a movement from a fundamental duty of care towards civil responsibility for the public good. National and continental associations have reacted promptly by establishing various regulations for the conduct of AI implementation in their jurisdictions. The General Data Protection Regulation (GDPR), for example, permits individuals to provide general consent in relation to their information. The continuous investment and research focus on further development of artificial intelligence shows that the future of individual lives, businesses and economies will continue to be influenced by numerous everyday artificial intelligence functions.

Name: Saleh A.S. AlAbdulhadi

Designation: Prince Sattam bin Abdulaziz University, KSA

Biography: Dr. Saleh AlAbdulhadi completed his PhD at the age of 28 at the University of Aberdeen, UK, and postdoctoral studies at Aberdeen Children's Hospital, Scotland, UK. He is Assistant Professor & Consultant in Medical Molecular Genetics, Founder and Chairman of the Medical Molecular Genetics Unit, Applied Medical Science College, and Founder and Chairman of the Dr. Saleh Office for Medical Genetics and Genetic Consultation (house of expertise). He has published more than 25 papers in reputed journals and serves on the editorial boards of reputed journals.

Title of the talk: Artificial Intelligence in Medical Genetic Diagnostics and Medical Genetic Laboratory Training

Abstract: Artificial intelligence (AI) is the science and engineering of making intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable. Deep learning is part of this evolving technology, empowering products and services whose users are unaware of the complex data processing taking place in healthcare services. In medical molecular genomic diagnosis, a specific type of AI algorithm known as deep learning, together with graphics processing units (GPUs), is used to process variant types and complex genetic databases. In this study, we examine different classes of problems that AI systems are designed to solve and describe the clinical genetic diagnostic cases that benefit from these solutions, both for patient care and for medical laboratory education and training in our medical genetics unit here in Saudi Arabia. This includes emerging methods for tasks in genetic counselling, pedigree evaluation, gene-to-gene interaction, haplotyping and phenotype-to-genotype correlation. We found that current DNA sequencing technology allows for the generation of genomic data uniformly and at scale, but integrating the phenotype data requires multiple data-collection modes, which tend to be slow, expensive, and lossy. AI technology is essential for improving quality assurance in the field of clinical genomic diagnosis.

Name: Francisco Garcia

Designation: Founder, Direcly, United States

Biography: Francisco is the founder of Direcly, a Google Cloud, Google Marketing Platform, and Adobe Analytics partner company. He has a background in marketing and is a certified data engineer, having previously worked at top ad agencies before transitioning to the tech industry. Francisco has received recognition from Google as a Champion Innovator in Data Analytics, and is actively involved in the community. He advises entrepreneurs through Google for Startups, leads the Google Developer Groups Cloud Miami, and conducts analytics and marketing seminars at universities.

Title of the talk: Using Data to Persuade and Influence

Abstract: With the vast amount of data available to us today, it can be overwhelming to try to process and make sense of it all. Our brains were not designed to handle this volume of information, and it can be difficult to cut through the noise and make informed decisions. This session aims to help attendees navigate these challenges and use data as a powerful tool for persuasion and influence. Leveraging concepts from cognitive psychology, the session helps attendees understand the psychological principles behind effective data storytelling. By understanding how the brain processes and retains information, attendees will learn how to craft a narrative that is more engaging and persuasive. The session will also cover the use of visualizations to help illustrate data and make it more accessible to the audience, as well as strategies for building trust and overcoming objections. By the end of the session, attendees will have a deep understanding of how to use data to persuade and influence others in a way that is both effective and practical.

Name: Patrick Henz

Designation: Primetals Technologies – a Mitsubishi Heavy Industries entity, United States

Biography: Patrick Henz is a Compliance Officer with over a decade of experience in the field. He started his career in 2007 at the Corporate Information Office and Compliance at Siemens, where he implemented the company’s Anti-Corruption program in Mexico and other Latin American countries. He is Head of Governance, Risk & Compliance at a leading engineering and plant construction company, based in Atlanta. He is a frequent speaker at university workshops and conferences, and wrote the book “Into the Metaverse”, published in 2023.

Title of the talk: From Avatars to Virtual Beings

Abstract: This is a discussion of the history and evolution of the term “avatar” and its usage in technology. The word originates in Hinduism and was later used by science fiction author Philip K. Dick in his work “The Exegesis of Philip K. Dick,” where he posits that humans are avatars of God with amnesia. The modern usage of the term as a graphical representation of a user in technology was popularized by game designer Richard Garriott in the 1980s with his role-playing game series Ultima. The discussion also explores the potential for avatars to be combined with artificial intelligence to create “Virtual Beings” that can act autonomously within virtual platforms, and the use of avatars in social media and meeting platforms. It also notes that the more information the platform has about the user, the more realistically the avatar can act autonomously, and that a “Personal Digital Twin” in the cloud can be used to continue the activity of the avatar even when the user is disconnected.

Name: Murad M. Badarna

Designation: Yezreel Valley Academic College, Israel

Biography: Murad M. Badarna received his B.Sc. in information systems, his M.Sc. in computer science, and his Ph.D. in machine learning, all from the University of Haifa. Murad has joined the Department of Information Systems at both the University of Haifa and the Max Stern Yezreel Valley College. His main research interests are in the fields of machine learning, especially selective sampling, active learning, and deep learning. Murad is also active in the high-tech industry: he is the head of the Research and Development department of the xBiDa company, which provides a combination of advanced video analytics technology and data science services.

Title of the talk: Unsupervised Methods to Deal with Unsupervised Mixed Data

Abstract: Background and goals: Clustering is an artificial intelligence technique that partitions objects into sub-groups. In clustering, the goal is to group similar objects together and different objects into different groups. K-means is one of the best-known and most popular clustering algorithms. It works by assigning each point (i.e. object) to the closest center and then updating the centers based on those points. However, clustering data with mixed types (i.e., attributes with numerical and categorical types) is still a challenging open problem. In this study, we propose a new k-means clustering algorithm for mixed datasets.

Research method: Running the k-means clustering algorithm requires a distance function, in order to associate each point with the closest center, and a mean formula, in order to update the centers. Let {att_1, att_2, …, att_N} denote the attributes, and let R_att = {att_1, …, att_l} and C_att = {att_(l+1), …, att_N} contain the numerical and categorical attributes, respectively. Then the distance between two points X = {x_1, x_2, …, x_N} and Y = {y_1, y_2, …, y_N} is

dist(X, Y) = Σ_{i=1}^{l} (x_i − y_i)² + Σ_{i=l+1}^{N} ¬(x_i = y_i),

where ¬(x_i = y_i) = 1 if x_i ≠ y_i and 0 if x_i = y_i. Using this distance, we build a distance matrix that includes all the distances between the points. We then use the multi-dimensional scaling technique to represent the objects in a continuous form. As a result, the new space includes only continuous attributes that reflect the actual similarity between the objects in their original form. Finally, we run the original k-means clustering algorithm on the dataset in the new space.
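The pipeline described above (mixed-type distance matrix, then multi-dimensional scaling, then standard k-means) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the classical-MDS variant and the deterministic farthest-point initialisation of k-means are implementation choices of the sketch.

```python
import numpy as np

def mixed_distance_matrix(rows, num_idx, cat_idx):
    """Distance from the abstract: squared Euclidean over numerical
    attributes plus a 0/1 mismatch count over categorical attributes."""
    n = len(rows)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            num = sum((rows[i][k] - rows[j][k]) ** 2 for k in num_idx)
            cat = sum(1 for k in cat_idx if rows[i][k] != rows[j][k])
            D[i, j] = D[j, i] = num + cat
    return D

def classical_mds(D, dim=2):
    """Embed the objects in a continuous space that preserves the distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centred squared distances
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:dim]          # keep the largest eigenvalues
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

def kmeans(Y, k, iters=100):
    # Farthest-point initialisation keeps the sketch deterministic.
    centers = [Y[0]]
    while len(centers) < k:
        d = np.min([((Y - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(Y[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((Y[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([Y[labels == c].mean(axis=0) if np.any(labels == c)
                        else centers[c] for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

Running the three steps on a toy table with one numerical and one categorical column recovers the two obvious groups.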

Name: Loai Abdallah

Designation: Founder & CEO, Israel

Biography: Loai Abdallah works with a group of incredibly bright people who are obsessed with using innovative technological solutions to solve business challenges faced by other incredibly bright people. He is an analytics, data, adtech, and marketing professional, recognized as a Champion in Data Analytics by the Google Cloud Innovators program. He is skilled in Google Cloud Platform, Google Marketing Platform, Google Analytics 4, data storytelling, business intelligence, innovative problem solving using the latest technology, product planning for design and road-mapping, team leadership, data engineering and cross-departmental work.

Title of the talk: Unsupervised Methods to Deal with Unsupervised Mixed Data

Abstract: Background and goals: Clustering is an artificial intelligence technique that partitions objects into sub-groups. In clustering, the goal is to group similar objects together and different objects into different groups. K-means is one of the best-known and most popular clustering algorithms. It works by assigning each point (i.e. object) to the closest center and then updating the centers based on those points. However, clustering data with mixed types (i.e., attributes with numerical and categorical types) is still a challenging open problem. In this study, we propose a new k-means clustering algorithm for mixed datasets. Research method: Running the k-means clustering algorithm requires a distance function, in order to associate each point with the closest center, and a mean formula, in order to update the centers. Using this distance, we build a distance matrix that includes all the distances between the points. We then use the multi-dimensional scaling technique to represent the objects in a continuous form. As a result, the new space includes only continuous attributes that reflect the actual similarity between the objects in their original form. Finally, we run the original k-means clustering algorithm on the dataset in the new space.

Name: Gal Hever

Designation: Tel Aviv, Israel

Biography: MSc in Data Science, with over a decade of accumulated expertise in machine learning and data analytics from academia and industry. Deploys algorithms to production by applying data-driven machine learning and AI solutions end to end, from research through development and testing. Co-founder of the Tech7Juniors association for youth development in tech domains, and founder of DataNights, aimed at increasing the presence of women in the high-tech industry.

Title of the talk: WER We Are?
Abstract: Automatic speech recognition (ASR) is a rapidly evolving field that has made significant progress in recent years, but it still faces several challenges and difficulties. One major challenge is the variability of human speech, which can be affected by factors such as accent, background noise, and speaking style. Another challenge is the lack of large, high-quality annotated datasets, which are necessary for training and evaluating ASR systems. In addition, there are ongoing efforts to improve the performance of ASR systems on low-resource languages and underrepresented accents. Despite these challenges, there have been significant advances in ASR technology, including the development of deep learning models and the use of self-supervised learning techniques to improve performance on a wide range of tasks. In this talk, we will discuss the recent progress and ongoing challenges in ASR, as well as the potential future directions for the field.
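The metric behind the talk's title, Word Error Rate (WER), is the standard yardstick for the ASR systems discussed above: the word-level Levenshtein distance between a reference transcript and a hypothesis, normalised by the reference length. A minimal self-contained implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + insertions + deletions) divided by
    the number of reference words, via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[-1][-1] / len(ref)
```

For example, wer("the cat sat on the mat", "the cat sit on mat") counts one substitution and one deletion over six reference words, giving 2/6. Note that WER can exceed 1.0 when the hypothesis contains many insertions.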

Name: Dawei Wang

Designation: China University of Petroleum (East China), China

Biography: Dawei Wang received the B.S. degree in electronic information science and technology from Qingdao Agricultural University, Qingdao, China, in 2017, and the M.S. degree in Agricultural Information Technology from Qingdao Agricultural University in 2020, and is now a PhD student at the China University of Petroleum (East China). His research interest is SAR oil spill detection.

Title of the talk: An improved deep learning model for oil spill detection by polarimetric features from SAR images

Abstract: Oil spill pollution at sea causes great damage to the marine environment, especially to ecosystems. Quad-polarimetric Synthetic Aperture Radar (SAR) has become an important technology since it can provide polarization features for marine oil spill detection. Oil spill detection can be achieved using deep learning models based on polarimetric features. However, insufficient feature extraction due to limited model depth and small receptive fields leads to loss of target information, and with fixed model hyperparameters, oil spill detection results remain incomplete or misclassified. To solve these problems, we propose an improved deep learning model named BO-DRNet. The model obtains fuller features by using ResNet-18 as the backbone in the encoder of DeepLabv3+, and Bayesian Optimization (BO) is used to optimize the model's hyperparameters. Experiments were conducted on ten well-known polarimetric features extracted from three quad-polarimetric SAR images obtained by RADARSAT-2. Experimental results show that, compared with other deep learning models, BO-DRNet performs best, with a mean accuracy of 74.70% and a mean Dice coefficient of 0.8552. The work in this paper provides a valuable tool to manage such disasters effectively.

Name: Léna Sasal

Designation: Sorbonne University, United Arab Emirates

Biography: Léna Sasal is a PhD candidate at Sorbonne University. She works on forecasting using deep learning techniques, applying it to oil and gas applications with TotalEnergies in Abu Dhabi.

Title of the talk: W-Transformer: A Wavelet-based Transformer Framework for Univariate Time Series Forecasting

Abstract: Deep learning utilizing transformers has recently achieved a lot of success in many vital areas such as natural language processing, computer vision, anomaly detection, and recommendation systems, among many others. Among several merits of transformers, the ability to capture long-range temporal dependencies and interactions is desirable for time series forecasting, leading to its progress in various time series applications. In this paper, we build a transformer model for non-stationary time series. The problem is challenging yet crucially important. We present a novel framework for univariate time series representation learning based on the wavelet-based transformer encoder architecture and call it W-Transformer. The proposed W-Transformer applies a maximal overlap discrete wavelet transformation (MODWT) to the time series data and builds local transformers on the decomposed datasets to vividly capture the nonstationarity and long-range nonlinear dependencies in the time series. Evaluating our framework on several publicly available benchmark time series datasets from various domains and with diverse characteristics, we demonstrate that it performs, on average, significantly better than the baseline forecasters for short-term and long-term forecasting, even for datasets that consist of only a few hundred training samples.
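The decomposition step can be illustrated with the simplest (Haar) filter pair; the paper presumably uses standard MODWT filter banks, so this is only a sketch of the idea. Each level splits the current smooth series into a detail series and a coarser smooth series, using a circular shift so every component keeps the original length, and the components reconstruct the series additively, which is what allows a separate forecaster (here, a local transformer) to be trained per component.

```python
import numpy as np

def haar_modwt(x, levels):
    """Haar MODWT via the a-trous scheme with circular boundary handling.
    Returns detail series W_1..W_J and the final smooth V_J; the output
    reconstructs additively: x = V_J + sum_j W_j."""
    smooth = np.asarray(x, dtype=float)
    details = []
    for j in range(1, levels + 1):
        shift = np.roll(smooth, 2 ** (j - 1))   # lag of 2^(j-1) samples
        details.append((smooth - shift) / 2.0)  # detail (high-pass) at level j
        smooth = (smooth + shift) / 2.0         # smooth (low-pass) at level j
    return details, smooth
```

Unlike the decimated DWT, every component here has the same length as the input, so the per-component forecasts can simply be summed to forecast the original series.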

Name: Gerry Wolff

Designation: CognitionResearch.org, UK

Biography: Gerard Wolff PhD CEng MIEEE is the Director of CognitionResearch.org. He has held academic posts in the School of Computer Science and Electronic Engineering, Bangor University, the Department of Psychology, University of Dundee, and the University Hospital of Wales, Cardiff. He has held a Research Fellowship at IBM, Winchester, UK, and has been a Software Engineer with Praxis Systems plc. He received the Natural Sciences Tripos degree from Cambridge University and the PhD degree from the University of Wales, Cardiff. He is also a Chartered Engineer and Member of the IEEE.

Title of the talk: THE SP THEORY OF INTELLIGENCE AND ITS POTENTIAL IN ROBOTICS

Abstract: The SP System, meaning the SP Theory of Intelligence and its realisation in the SP Computer Model, is the product of a lengthy programme of research, which now provides solutions or potential solutions to several problems in AI research. An extended overview of the SP System has been published, along with a much more comprehensive description. This presentation is about how the SP System may prove useful in the development of intelligence in robots; a peer-reviewed paper on this topic has also been published. The main theme of this presentation is generality, as described in the following subsections.

Generality needed for AI in robots: Where some degree of autonomy and intelligence is required in robots, it seems fair to say that the capabilities developed so far are quite narrowly specialised, such as vacuum cleaning an apartment or a house, navigating a factory floor, walking over rough ground, and so on. There is a pressing need to provide robots with human-like generality and adaptability in intelligence.

Generality in the development of the SP System: The overarching goal in the development of the SP System has been to search for a framework that would simplify and integrate observations and concepts across artificial intelligence, mainstream computing, mathematics, and human learning, perception, and cognition. Despite the ambition of this goal, it seems that promising solutions have been found (next). There are, of course, reasons to worry about the development of super-intelligence in robots, but that is outside the scope of this presentation.

Name: Shreyas Sundares

Designation: Course5 Intelligence, UAE

Biography: Professional with two decades of experience driving hyper-growth and strategic transformation at global Fortune enterprises. Growth and success partner to 25+ organizations across industries. Awarded Data & Analytics leader for positively impacting growth, strategic partnerships, customer experience, risk management, innovative technological advances, operational innovation and efficiency, and culture development. Directed multidisciplinary COE teams through the build and operationalization of bespoke solutions and product accelerators led by technological advances in big data engineering, cloud platform integration, advanced analytics and insights, artificial intelligence, intelligent automation and process re-engineering. Notable credentials in business leadership from Stanford University and business transformation from Northwestern University, USA.

Title of the talk: DATA STRATEGY AND GOVERNANCE WILL DEFINE SUCCESS OF ORGANIZATIONS, BUSINESS AND INNOVATION

Abstract: Every organization and every business today is trying to create a unique and better experience for its customers. Everything from how we live to how we work to how we play is being monitored and analysed in order to offer something better the next time. This has not only led to enormous innovation but also continues to deliver the best, with consumers at the receiving end of all the goodness. It has also created a very competitive world in which each organization wants to be better, bigger and faster, and stay ahead of the competition. With this, data, commonly known as the new gold or oil or coal, is at the centre of all of this innovation, competitive landscape and enhanced customer experience. While every organization is trying and doing its bit in terms of taming the data and putting it to meaningful use, data strategy and governance emerges as a top priority for every organization and business. Having led digital transformation across several large entities and global Fortune enterprises, my observation has been that there is no perfect recipe for data and its success. However, how an organization brings in data; how the data, once brought in, is housed, safeguarded and processed; how the data is used for ambitious initiatives; monetization of the data; and finally dissemination of the data form an end-to-end life cycle that needs utmost care at each stage for an augmented impact. In my talk I will discuss data strategy and governance, its importance, some standards and best practices, what some of the most successful organizations are doing in this field, and key considerations for each entity.

Name: ANDREW ERNEST RITZ

Designation: Langtech, Poland

Biography: Andrew Ernest Ritz has master's degrees in Ergonomics (London University) and Signal Processing and Machine Intelligence (Surrey University). He has been working on artificial-intelligence-related problems since participating in the UK Alvey programme in the 1980s. Recently he has been focusing on making the AI applications he has developed for teaching English available on the web. These tools were created while working for his own company, Langtech, and funded in part by E NET Production, Katowice, Poland.

Title of the talk: REPRESENTING AND REASONING ABOUT VIEWPOINT WITHIN THE CONTEXT OF COMPUTER VISION, COMPUTER GRAPHICS AND ROBOTICS

Abstract: The concept of Viewpoint Reasoning was proposed a number of years ago, within the context of 3D model-based vision, for representing viewpoint information about a scene and the objects therein. This concept is based on a representation that stores feature visibility in terms of 3D surfaces or solids that encapsulate individual objects or the whole scene, much in the same way as property spheres and the aspect graph. When the visibility of objects and their features is represented in this way, answers to questions about the joint visibility of features from viewpoints within a scene are at hand. With respect to computer graphics, viewpoint reasoning requires semantics to be routinely associated with 3D objects and the scenes they are placed in. Once this is done, a large number of options become available for interacting with a scene and individual objects in a task-oriented way, and not just in terms of geometry. Viewpoint Reasoning can be applied within the areas of computer graphics and robotics to problems such as good viewpoint selection, sensor placement, motion planning and object exploration. This talk describes a system, written in the programming language Python, which demonstrates how such tasks can be performed even when complex objects are involved.
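As a toy illustration of good viewpoint selection, one of the tasks listed above, the sketch below scores candidate viewpoints by a back-face test: a feature counts as visible when its surface normal faces the viewpoint. This is not the system described in the talk, which encapsulates visibility in surfaces or solids around the scene; the simplified version here ignores occlusion and field of view.

```python
import numpy as np

def best_viewpoint(points, normals, candidates):
    """Pick the candidate viewpoint that sees the most surface features.
    A feature is 'visible' when its outward normal faces the viewpoint
    (back-face test only; occlusion is ignored in this sketch)."""
    best, best_count = None, -1
    for v in candidates:
        to_view = v - points                      # feature-to-viewpoint vectors
        to_view /= np.linalg.norm(to_view, axis=1, keepdims=True)
        # A positive dot product means the feature faces this viewpoint.
        visible = int(np.sum(np.einsum('ij,ij->i', to_view, normals) > 0))
        if visible > best_count:
            best, best_count = v, visible
    return best, best_count
```

For joint-visibility queries of the kind the talk discusses, the same per-feature visibility test could be evaluated over a sampled sphere of viewpoints rather than a short candidate list.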

Name: MARTA CHINNICI

Designation: ENEA, Italy

Biography: She is currently a senior researcher at the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA) in the Energy Technology Department, where she works on ICT for energy efficiency issues. She received her master's degree in mathematics in 2004, and her PhD in applied mathematics and computer science in 2008 from the University of Naples, Italy, with a thesis on stochastic self-similar processes and applications in non-linear dynamical systems. Prior to her appointment with ENEA, she was a fellow and research assistant at the University of Salerno and a postdoctoral research fellow at ENEA. Marta has held different professorship positions while at ENEA, including Adjunct Professor in Qualitative Methods and Mathematics for Economics and Business, and Professor in Mathematics and Economics for the MBA course at the Link Campus University, Rome. She is involved in many national and international research projects (also as project leader).

Title of the talk: SMART CITY PLATFORM: SCALABILITY, INTEROPERABILITY AND REPLICABILITY PLATFORM TO MANAGE URBAN APPLICATIONS

Abstract: In a smart city environment, the explosive growth in the volume, speed, and variety of data being produced every day requires a continuous increase in the processing speeds of servers and entire network infrastructures and platforms, as well as new resource management models. This poses significant challenges (and provides attractive development opportunities) for data-intensive and high-performance computing, i.e., how to turn enormous datasets into valuable information and meaningful knowledge efficiently. The variety of sources complicates the task of context data management, since data derive from different sources with different data formats and with varying storage, transformation, delivery, and archiving requirements. At the same time, rapid responses are needed for real-time applications. With the emergence of cloud infrastructures and platforms, achieving highly scalable data management in such contexts is a critical problem, as the overall urban application performance is highly dependent on the properties of the data management service. This means continuously developing and adopting ICT technologies to create and use platforms through which government, business and citizens can communicate and work together, and which provide the necessary connections between the networks that are the base for the services of the smart city. The main features of a generic Smart City Platform (SCP) are the following: make data, information, people and organizations smarter; redesign the relationships between government, private sector, non-profits, communities and citizens; ensure synergies and interoperability within and across city policy domains and systems (e.g. transportation, energy, education, health & care, utilities, etc.); and drive innovation, for example through so-called open data, living labs and tech hubs.

In this work, the authors propose an approach and describe a methodology and a modular and scalable multi-layered ICT platform called the ENEA Smart City Platform (ENEA-SCP) to address the problem of cross-domain interoperability in the context of smart city applications. The ENEA-SCP is implemented following the Software as a Service (SaaS) paradigm, exploiting cloud computing facilities to ensure flexibility and scalability. Interoperability and communication are addressed by means of web services, and data exchange is based on the JSON data format. Taking these guidelines as references, this work provides a description of the SCP developed by ENEA and its potential use for smart and IoT city applications. The solution provided by the ENEA-SCP to exploit potentials in smart city environments is based on four fundamental concepts: open data, interoperability, scalability, and replicability. In this scenario, the ENEA-SCP tackles the issues concerning these aspects by providing a reference framework of modular [2] specifications for stakeholders willing to implement ICT platforms to exploit the smart city vision's potential and therefore offer new services for the citizen. The ENEA Smart City Platform exploits the computational resources of the ENEAGRID infrastructure [3], as it is deployed in the cloud hosted at the Portici Research Center site. The creation of a customized ENEA cloud-based platform environment is possible thanks to the virtualization technologies of the VMware platform, which allows hosting the management, transportation and processing of project data services, ensuring their availability and protection over time. In more detail, the SCP is composed of six Virtual Machines (VMs), each of which hosts a component with a specific role.

Name: Bilal Alhayani

Designation: Yildiz Technical University, Turkey

Biography: Bilal Al-Hayani received the B.Sc. degree in Laser Engineering from the University of Technology, Baghdad, Iraq (1999–2004), and the M.Sc. degree in Electronics and Telecommunication Engineering and Electromagnetics from the University of Pune, India (2011–2013). In 2014 he joined the Electronics and Communication Department at Yildiz Technical University, Istanbul, Turkey, as a Ph.D. researcher. His general research interests lie in signal processing, communication theory, and signal coding for wireless communication, as well as image processing; his specific research areas include cooperative communication techniques. He has published many original research articles in SCIE (Science Citation Index Expanded) and Scopus-indexed journals with SAGE and ASP publishers.

Title of the talk: Establishment of Body Auto Fitting Model (BAFM) Using NJ-GPM at Toyota

Abstract: Wireless sensor network technologies are considered one of the key research areas in computer science and the healthcare industry. The Internet of Medical Things (IoMT) is an amalgamation of wearable medical devices and mobile applications that can connect to healthcare information technology systems using networking technologies. The impact of the IoT in healthcare, although still in its initial stages of development, has been significant. Reviewing and understanding the applications of the IoMT in personalised healthcare will promote individualized care and higher standards of care and living through data-driven treatment regimens, as well as devices optimized for individual physiological requirements. The IoMT senses patients' health status and then transfers the medical data to doctors and healthcare providers with the help of remote cloud data centres. These data are used for disease diagnosis and medical decision-making. The main challenge in the IoMT is how to manage the large amount of medical data. Potential topics include, but are not limited to, IoT measurement and control and new methodologies of optical sensor networks for detection:

  • High-level methods and tools for node and application design of healthcare sensors
  • The physical environment, including both local and remote sensors
  • Biomedical sensor networks and identification of biological and chemical agents
  • Image transmission in healthcare sensor networks
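To make the sensing-to-cloud flow concrete, here is a minimal Python sketch of a cloud-side check on wearable readings. The function name, thresholds, and sample values are invented for illustration and do not come from any specific IoMT system.

```python
from statistics import mean

# Illustrative sketch only: a wearable device posts heart-rate samples,
# and a cloud-side check flags readings for clinician review. The
# thresholds below are assumptions, not clinical guidance.
def flag_for_review(samples, low=50, high=120):
    """Return True if the average heart rate falls outside a safe band."""
    avg = mean(samples)
    return avg < low or avg > high

print(flag_for_review([62, 64, 61]))     # normal resting range -> not flagged
print(flag_for_review([128, 131, 140]))  # sustained high readings -> flagged
```

In a real deployment, a check like this would sit behind the remote cloud data centre the abstract mentions, with the flagged cases routed to doctors and healthcare providers.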

Name: LUCA GIRALDI

Designation: EMOJ, Italy

Biography: Luca Giraldi is CEO of EMOJ, which offers advanced technologies based on Artificial Intelligence techniques to revolutionize the world of Customer Experience. Its motto is "we are an artificial intelligence company, but we put the human first, before the artificial". EMOJ operates in the fields of automotive, retail, and culture, and is considered by Unicredit Startlab and Bocconi to be among the 10 most successful Italian startups. Luca received a PhD in Industrial Engineering in 2019 and is an expert in digital transformation, customer experience, and empathic marketing.

Title of the talk: THE USE OF ARTIFICIAL INTELLIGENCE FOR EMOTION-AWARE CAR INTERFACE

Abstract: Nowadays, driver monitoring is a topic of paramount importance, because distraction and inattention are a relevant safety concern and a cause of crashes. Current Driver Monitoring Systems collect and process dynamic data from sensors embedded in the vehicle and from RGB-D cameras to detect visual distraction and drowsiness, but they neglect the driver's emotional state, despite research demonstrating that emotion and attention are linked and that both affect driving performance. For instance, negative emotions can alter perception and decision-making.

Consequently, explaining complicated phenomena such as the effects of emotions on driving, and conceiving how emotions could be used to decrease distraction, need to be explored. Today, several methods and technologies allow the recognition of human emotions, differing in their level of intrusiveness. Invasive instruments based on biofeedback sensors can affect subjects' behaviour and the emotions they experience. Non-intrusive emotion recognition systems, i.e., those based on speech analysis and facial emotion analysis, implement Convolutional Neural Networks (CNNs) for signal processing. However, no study has actually tested their effectiveness in a motor vehicle for enabling emotion regulation. In this context, the research focuses on the introduction of a multimedia sensing network and an affective intelligent interface able to monitor the driver's emotional state and degree of attention by analysing facial expressions, and to map the recognized emotions to the car interface in a responsive way that increases human wellbeing and safety. The technology adopted for emotion recognition is reported in prior work and is extended here with additional features specific to the vehicle environment, so as to implement emotion regulation strategies in the human-machine interface and improve driving safety. The result is an emotion-aware interface able to detect and monitor human emotions and to react in case of hazard, e.g. providing warnings and proper stimuli for emotion regulation, and even acting on the dynamics of the vehicle.
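A minimal sketch of the emotion-to-interface mapping step, assuming a classifier has already labelled the driver's facial expression. The emotion labels and responses below are illustrative inventions, not EMOJ's actual regulation strategies.

```python
# Hypothetical mapping sketch (not EMOJ's actual system): once a CNN
# classifier has labelled the driver's facial expression, the interface
# selects a regulation strategy for that emotional state.
RESPONSES = {
    "anger":   "play calming audio and soften cabin lighting",
    "sadness": "suggest a short break at the next rest area",
    "fear":    "increase the sensitivity of driver-assistance warnings",
    "neutral": "no intervention",
}

def interface_action(detected_emotion):
    """Map a recognized emotion to an in-car regulation strategy."""
    return RESPONSES.get(detected_emotion, "no intervention")

print(interface_action("anger"))
```

A production system would replace this lookup table with policies validated against driving-performance data, but the table makes the "map emotions to the interface" step of the pipeline explicit.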

Name: Jemili Farah

Designation: University of Sousse, TUNISIA

Biography: Farah JEMILI had the Engineer degree in Computer Science in 2002 and the Ph.D degree in 2010. She is currently Assistant Professor at Higher Institute of Computer Science and Telecom of Hammam Sousse (ISITCOM), University of Sousse, Tunisia. She is a senior Researcher at MARS Laboratory (ISITCOM –Tunisia). Her research interests include Artificial Intelligence, Cyber Security, Big Data Analysis, Cloud Computing and Distributed Systems. She served as reviewer for many international conferences and journals. She has many publications; 6 book chapters, 6 journal publications and more than 20 conference papers.

Title of the talk: Deep Learning for Intrusion Detection

Abstract: In recent years, the world has seen a significant evolution in different areas of connected technologies such as smart grids, the Internet of Vehicles, long-term evolution (LTE), and 5G communication. By 2023, the number of IP-connected devices is expected to be three times larger than the global population, and the total number of DDoS attacks is expected to double from 7.9 million in 2018 to 15.4 million, as reported by Cisco. As of 2020, the amount of data generated each day exceeds petabytes, and this includes the traces internet users leave when they access a website, mobile application, or network.

This growth has given hackers more room to launch malicious attacks and to develop new techniques and tools for intrusion. Intrusion detection systems (IDSs) are among the most important systems used in cyber security. They are the hardware or software that monitors and analyzes data flowing through computers and networks to detect security breaches threatening the confidentiality, integrity, and availability of a system's resources. According to IBM, the average cost of a data breach increased from $3.86 million to $4.24 million in 2021, a loss that no business could easily sustain. That is why, as Forbes reports, 83% of enterprise workloads were expected to move to the cloud by 2020, making it necessary to develop new and efficient IDSs. Deep learning for intrusion detection is one of the hot topics in recent academic research. With the improvement of computing power and the rapid growth of data volumes, deep learning has attracted renewed attention, and its practicality and popularity have greatly improved. Deep learning is an advanced branch of machine learning that uses multilayer networks; the layers are connected by neurons, which represent the mathematical computations of the learning process. Intrusion detection has been widely studied in both industry and academia, but cybersecurity analysts always want more accuracy and global threat analysis to secure their systems in cyberspace. Big data represents the great challenge for intrusion detection systems, making it hard to monitor and analyze such a large volume of data using traditional techniques. Recently, deep learning has emerged as a new approach that enables the use of big data with low training time and a high accuracy rate. This contribution proposes an IDS based on cloud computing and the integration of big data and deep learning techniques to detect different attacks as early as possible.
To demonstrate its efficacy, the proposed system is implemented on the Microsoft Azure cloud, which provides both processing power and storage capabilities, using a convolutional neural network (CNN-IDS) with the distributed computing environment Apache Spark integrated with the Keras deep learning library. The proposed approach includes data pre-processing and deep learning. The experimental results show the effectiveness of the approach in terms of accuracy and detection rate, owing to the integration of the deep learning technique with the Apache Spark engine.
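The pre-processing step mentioned above can be sketched in plain Python. The feature values and label names below are invented for illustration; a real CNN-IDS pipeline would apply steps like these to full flow datasets inside Spark before training.

```python
# Minimal pre-processing sketch under assumed feature and label names;
# two steps commonly applied before feeding flow records to a CNN:
# min-max scaling of numeric features and integer-encoding of labels.
def min_max(values):
    """Scale numeric feature values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def encode_labels(labels):
    """Map attack-class strings to integer ids, returning ids and mapping."""
    mapping = {name: i for i, name in enumerate(sorted(set(labels)))}
    return [mapping[label] for label in labels], mapping

durations = [0, 120, 60]                       # e.g. connection durations in seconds
scaled = min_max(durations)                    # [0.0, 1.0, 0.5]
ids, mapping = encode_labels(["normal", "dos", "normal"])
```

Scaling keeps features on a common range so no single feature dominates the network's gradients, and the integer ids are what a categorical loss function expects.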

Name: Ghazal Azarfar

Designation: University of Saskatchewan, Canada

Biography: Data Scientist · Machine Learning Engineer · Applied Scientist · Associate Researcher · Computer Vision Engineer

Title of the talk: Deep Learning to Estimate Age from Chest CT Scans

Abstract: Purpose: The objective of our study was to estimate a patient's age from a chest CT scan and to assess whether the CT-estimated age is a better predictor of lung cancer risk than chronological age. Methods: Composite images were created to develop an age prediction model based on Inception-ResNet-v2. We used 13,824 chest CT scans from the National Lung Screening Trial (NLST) for training (91%), validation (5%), and testing (4%). We independently tested the model using 1,849 CT scans collected in Saskatoon, Canada. We then assessed the CT-estimated age as a risk factor for lung cancer screening using the NLST dataset. We calculated the relative lung cancer risk between two groups; group 1: those assigned a CT age older than their chronological age, and group 2: those assigned a CT age younger than their chronological age. Results: Comparing chronological age with the estimated CT age resulted in a mean absolute error of 1.91 years and a Pearson correlation coefficient of 0.9. The area associated with the lungs appeared to be the most activated region in the age estimation model. A relative lung cancer risk of 1.8 (95% confidence level) was calculated between the two groups, indicating a positive association between having an older chest CT age (than chronological age) and having lung cancer. Conclusion: Our results show that CT-estimated age may be a better predictor of lung cancer risk than chronological age.
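The evaluation metrics reported in the abstract (mean absolute error and relative risk) can be recomputed on toy numbers. The sample ages and group counts below are invented for illustration; they are not the study's data.

```python
from statistics import mean

# Toy recomputation sketch of the study's two headline metrics on
# made-up numbers (the real study used 13,824 NLST scans).
def mae(y_true, y_pred):
    """Mean absolute error between chronological and CT-estimated ages."""
    return mean(abs(t - p) for t, p in zip(y_true, y_pred))

def relative_risk(cases_older, n_older, cases_younger, n_younger):
    """Risk in the 'CT age older' group divided by risk in the 'CT age younger' group."""
    return (cases_older / n_older) / (cases_younger / n_younger)

print(mae([60, 70, 65], [62, 69, 66]))   # average of errors 2, 1, 1
print(relative_risk(18, 100, 10, 100))   # 0.18 / 0.10 = 1.8
```

A relative risk above 1 means cancers were proportionally more frequent in the group whose CT-estimated age exceeded their chronological age, which is the association the study reports.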


Name: ADAM ALONZI

Designation: Interdisciplinary Analyst at EthicsNet, USA.

Biography: Adam Alonzi is a futurist, writer, biotechnologist, programmer, and documentary maker. He is an interdisciplinary analyst for EthicsNet, a nonprofit building a community with the purpose of co-creating a dataset for machine ethics algorithms. He also serves as the Head of New Media at BioViva Science, as an analyst for the Millennium Project, and as a consultant for a number of technology startups.

Title of the talk: THE FOUNDATIONS OF ROBO SOCIOLOGY: VALUES AND THE AGGREGATE BEHAVIOURS OF SYNTHETIC INTELLIGENCES

Abstract: Outcomes at the macro level often cannot be accurately extrapolated from the micro-behaviours of individual agents. The interdependence of a complex system's components makes simulation a viable option for exploring cause-and-effect relationships within it (Miller and Page, 2009). Chaos theory emphasizes the sensitivity of such networks to starting conditions (Boccaletti, 2000), which strongly suggests that thought should be put into the architecture of an AGI "society" before it begins to take shape. Protocols for emergency interventions should certainly be in place, but the network itself should be robust enough from the beginning to handle sudden deviations from basic ethical precepts by one or more of its members. Outside of its context, and without any information about the parts to which it is connected, a cell or leaf or animal can be studied, but not understood in a meaningful way (Mitchell, 2009). Creating moral agents in a hyperconnected world will involve modeling their interactions with entities like and unlike themselves in the face of both predictable and unforeseen events. This will be helpful, as groups can behave differently from their individual parts (Schelling, 1969). Keeping AI friendly does not end with giving each AI a set of maxims before letting it loose, but with satisfactorily explicating the emergent phenomena that arise from the interactions of similarly or differently "educated" machines.
Because of the near certainty that synthetic intelligences will communicate rapidly and regularly, it is imperative that thought leaders in AI safety begin thinking about how groups of artificially intelligent agents will behave.
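A toy threshold-cascade model, in the spirit of the agent-based simulations the abstract cites (Schelling, Miller and Page), shows how macro outcomes can diverge from micro rules. All parameters below are invented for illustration.

```python
# Toy sketch: each agent deviates from a shared norm once the number of
# already-deviating agents reaches its personal threshold. A single
# unconditional deviator can either trigger a full cascade or stall,
# depending on the distribution of thresholds -- a macro outcome not
# readable off any individual agent's rule.
def cascade(thresholds):
    """Return how many agents end up deviating, given per-agent thresholds."""
    deviating = sum(1 for t in thresholds if t == 0)  # unconditional deviators
    changed = True
    while changed:
        changed = False
        new_total = sum(1 for t in thresholds if t <= deviating)
        if new_total != deviating:
            deviating = new_total
            changed = True
    return deviating

print(cascade([0, 1, 2, 3, 4]))  # thresholds form a chain: full cascade
print(cascade([0, 2, 2, 3, 4]))  # one gap in the chain: cascade stalls
```

The two runs differ in a single agent's threshold, yet one population fully defects while the other barely moves, which is precisely the kind of emergent, architecture-dependent behaviour the abstract argues must be studied before AGI "societies" take shape.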

Name: Qamar Wali

Designation: National University of Technology, Pakistan

Biography: Dr. Qamar Wali has been working as an Assistant Professor of Physics at the National University of Technology since September 2018. He earned a PhD in Advanced Materials from Universiti Malaysia Pahang in 2016 and is currently engaged in renewable energy technology research, particularly perovskite solar cells. He has published more than 30 research articles in world-renowned journals, with more than 1,300 citations. He holds a Malaysian patent on a 'multichannel nanotubular metal oxide' and is a recipient of the Research Fund for International Young Scientists (RFIS-I) award, funded by the National Natural Science Foundation of China. In his teaching, he has successfully implemented the OBE system for Applied Physics in different engineering programs. He has been involved in the project "Third Generation Photovoltaics for Building Integration: A Smart and Sustainable Energy Solution" with the US-PAK Center for Energy at the University of Engineering and Technology, Peshawar, Pakistan.

Title of the talk: Semi-transparent solar panels for smart buildings

Abstract: Considering the ongoing energy crises in developing countries such as Pakistan, fast, reliable, simple, less time-consuming, and cost-effective methods of producing electricity are required to replace the complicated procedures of generating electricity by conventional means. Global demand for energy is increasing rapidly because of population and economic growth, especially in emerging market economies. Cities consume more than two-thirds of the world's energy resources and are responsible for around the same share of CO2 emissions. Buildings alone are responsible for 36% of global energy consumption and nearly 40% of total direct and indirect CO2 emissions. At the current pace, global energy use in buildings could double or even triple by 2050, as the share of the world's population living in cities is projected to increase further in the coming decades. Under new policy, all new buildings occupied by public authorities should be nearly zero-energy rated. This means that new buildings must generate their own energy from renewable sources and not be wholly reliant on traditional grid-based forms of fossil-fuel energy. Integrating photovoltaics into buildings represents a feasible route towards energy-efficient buildings, and achieving sustainability goals in cities requires harvesting the full potential of the building envelope (facades, windows) for renewable energy generation. The potential of building-integrated photovoltaics (BIPV) to integrate into the building envelope holds aesthetic appeal for architects, builders, and property owners, and BIPV is a market sector expected to grow dramatically over the next 5–10 years.
Among all existing photovoltaic technologies, third-generation solar cells have attracted substantial interest for BIPV due to their reduced cost and a number of key advantages: low weight, aesthetic value for architects, and the ability to be printed in any pattern, enabling relatively low-cost BIPV without significant compromise in efficiency.

Keywords: solar energy materials; semi-transparent devices; nanomaterials; light weight; flexibility