Name: Gerry Wolff

Designation: CognitionResearch.org, UK

Biography: Gerard Wolff PhD CEng MIEEE is the Director of CognitionResearch.org. He has held academic posts in the School of Computer Science and Electronic Engineering, Bangor University; the Department of Psychology, University of Dundee; and the University Hospital of Wales, Cardiff. He has held a Research Fellowship at IBM, Winchester, UK, and has been a Software Engineer with Praxis Systems plc. He received the Natural Sciences Tripos degree from Cambridge University and the PhD degree from the University of Wales, Cardiff. He is also a Chartered Engineer and a Member of the IEEE.


Abstract: The SP System, meaning the SP Theory of Intelligence and its realisation in the SP Computer Model, is the product of a lengthy programme of research which now provides solutions or potential solutions to several problems in AI research. An extended overview of the SP System, and a much more comprehensive description, are available in the published literature, as is a peer-reviewed paper on the work described here. This presentation is about how the SP System may prove useful in the development of intelligence in robots. The main theme of this presentation is generality, as described in the following subsections.


Generality needed for AI in robots

Where some degree of autonomy and intelligence is required in robots, it seems fair to say that the capabilities developed so far are quite narrowly specialised: vacuum cleaning an apartment or a house, navigating a factory floor, walking over rough ground, and so on. There is a pressing need to provide robots with human-like generality and adaptability in intelligence.

Generality in the development of the SP System

The overarching goal in the development of the SP System has been to search for a framework that would simplify and integrate observations and concepts across artificial intelligence, mainstream computing, mathematics, and human learning, perception, and cognition. Despite the ambition of this goal, it seems that promising solutions have been found (next). There are, of course, reasons to worry about the development of super-intelligence in robots, but that is outside the scope of this presentation.

Name: Shreyas Sundares

Designation: Course5 Intelligence, UAE

Biography: Professional with two decades of experience driving hyper-growth and strategic transformation at global Fortune enterprises. Growth and success partner to 25+ organizations across industries. Awarded Data & Analytics leader, recognised for positively impacting growth, strategic partnerships, customer experience, risk management, innovative technological advances, operational innovation and efficiency, and culture development. Directed multidisciplinary COE teams through the build and operationalization of bespoke solutions and product accelerators led by technological advances in big data engineering, cloud platform integration, advanced analytics and insights, artificial intelligence, intelligent automation, and process re-engineering. Notable credentials in business leadership from Stanford University and business transformation from Northwestern University, USA.



Abstract: Every organization and every business today is trying to create a unique and better experience for its customers. Everything from how we live to how we work to how we play is being monitored and analysed so that something better can be offered the next time. This has not only led to enormous innovation but also continues to deliver the best, and consumers are at the receiving end of all the goodness. It has also created a very competitive world in which each organization wants to be better, bigger, and faster, and to stay ahead of the competition. With this, data, commonly known as the new gold (or oil, or coal), is at the centre of all of this innovation, competitive landscape, and enhanced customer experience. While every organization is doing its bit to tame data and put it to meaningful use, data strategy and governance emerge as a top priority for every organization and business. Having led digital transformation across several large entities and global Fortune enterprises, my observation has been that there is no perfect recipe for data and its success. However, how an organization brings data in; how that data, once brought in, is housed, safeguarded, and processed; how it is used for ambitious initiatives; how it is monetized; and, finally, how it is disseminated form an end-to-end life cycle that needs the utmost care at each stage for an augmented impact. In my talk I will discuss data strategy and governance, its importance, some standards and best practices, what some of the most successful organizations are doing in this field, and key considerations for each entity.


Name: Andrew Ernest Ritz

Designation: Langtech, Poland

Biography: Andrew Ernest Ritz has Masters degrees in Ergonomics (London University) and Signal Processing and Machine Intelligence (Surrey University). He has been working on Artificial Intelligence related problems since participating in the UK Alvey programme in the 1980s. Recently, he has been focusing on making the AI applications he has developed for teaching English available on the web. These tools were created while working for his own company, Langtech, and funded in part by E NET Production, Katowice, Poland.



Abstract: The concept of Viewpoint Reasoning was proposed a number of years ago within the context of 3D Model Based Vision for representing viewpoint information about a scene and the objects therein. This concept is based on a representation that stores feature visibility in terms of 3D surfaces or solids that encapsulate individual objects or the whole scene, much in the same way as property spheres and the aspect graph. When the visibility of objects and their features is represented in this way, answers to questions about the joint visibility of features from viewpoints within a scene are readily at hand.

With respect to computer graphics, viewpoint reasoning requires semantics to be routinely associated with 3D objects and the scenes they are placed in. Once this is done a large number of options become available for interacting with a scene and individual objects in a task-oriented way and not just in terms of geometry.

Viewpoint Reasoning can be applied within the areas of Computer Graphics and Robotics to problems such as Good Viewpoint Selection, Sensor Placement, Motion Planning, and Object Exploration. This talk describes a system, written in the programming language Python, which demonstrates how such tasks can be performed even when complex objects are involved.
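To make the idea of good viewpoint selection concrete, the following is a minimal, hypothetical sketch (not the system described in the talk): each object feature carries an outward normal, a feature counts as visible from a viewpoint when it faces the camera (a simple back-face test, with occlusion ignored), and the best viewpoint is the one that sees the most features jointly. All geometry and names are illustrative.

```python
# Toy good-viewpoint selection via a back-face visibility test.
# Feature normals and candidate view directions are unit vectors (illustrative).

features = {
    "front_panel": (0.0, 0.0, 1.0),   # outward normals of object features
    "back_panel": (0.0, 0.0, -1.0),
    "top_face": (0.0, 1.0, 0.0),
}

def visible(view_dir, normal):
    """A feature faces the camera when its normal opposes the view direction."""
    dot = sum(v * n for v, n in zip(view_dir, normal))
    return dot < 0.0

def score(view_dir):
    """Count how many features are jointly visible from this viewpoint."""
    return sum(visible(view_dir, n) for n in features.values())

# Choose the candidate viewpoint that sees the most features.
candidates = {"front": (0.0, 0.0, -1.0), "above_front": (0.0, -0.7, -0.7)}
best = max(candidates, key=lambda k: score(candidates[k]))
print(best, score(candidates[best]))  # -> above_front 2
```

A full viewpoint-reasoning system would replace the back-face test with the precomputed visibility surfaces (property spheres, aspect graphs) the abstract mentions, but the selection step has the same shape: score candidate viewpoints, then maximise.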


Designation: ENEA, Italy

Biography: She is currently a senior researcher at the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA) in the Energy Technology Department, where she works on ICT for energy efficiency issues. She received her master's degree in mathematics in 2004, and her PhD in applied mathematics and computer science in 2008 from the University of Naples, Italy, with a thesis on stochastic self-similar processes and applications in non-linear dynamical systems. Prior to her appointment with ENEA, she was a fellow and research assistant at the University of Salerno and a postdoctoral research fellow at ENEA. Marta has held different professorship positions while at ENEA, including Adjunct Professor in Qualitative Methods and Mathematics for Economics and Business, and Professor in Mathematics and Economics for the MBA course at the Link Campus University, Rome. She is involved in many national and international research projects (also as project leader).


Abstract: In a smart city environment, the explosive growth in the volume, speed, and variety of data being produced every day requires a continuous increase in the processing speed of servers and entire network infrastructures and platforms, as well as new resource management models. This poses significant challenges (and provides attractive development opportunities) for data-intensive and high-performance computing, i.e., how to turn enormous datasets into valuable information and meaningful knowledge efficiently. The variety of sources from which data derives complicates the task of context data management, resulting in different data formats with varying storage, transformation, delivery, and archiving requirements. At the same time, rapid responses are needed for real-time applications. With the emergence of cloud infrastructures and platforms, achieving highly scalable data management in such contexts is a critical problem, as overall urban application performance is highly dependent on the properties of the data management service. This means continuously developing and adopting ICT technologies to create and use platforms through which government, business, and citizens can communicate and work together, and which provide the necessary connections between the networks that are the base for the services of the smart city. The main features of a generic Smart City Platform (SCP) are the following:
  • Make data, information, people, and organizations smarter
  • Redesign the relationships between government, the private sector, non-profits, communities, and citizens
  • Ensure synergies and interoperability within and across city policy domains and systems (e.g. transportation, energy, education, health & care, utilities, etc.)
  • Drive innovation, for example through so-called open data, living labs, and tech hubs
In this work, the authors propose an approach and describe a methodology and a modular, scalable, multi-layered ICT platform called the ENEA Smart City Platform (ENEA-SCP) to address the problem of cross-domain interoperability in the context of smart city applications. The ENEA-SCP is implemented following the Software as a Service (SaaS) paradigm, exploiting cloud computing facilities to ensure flexibility and scalability. Interoperability and communication are addressed by means of web services, and data exchange is based on the JSON data format. Taking these guidelines as references, this work provides a description of the SCP developed by ENEA and its potential use for smart and IoT city applications. The solution provided by the ENEA SCP to exploit the potential of smart city environments is based on four fundamental concepts: open data, interoperability, scalability, and replicability. In this scenario, the ENEA SCP tackles the issues concerning these aspects by providing a reference framework of modular [2] specifications for stakeholders willing to implement ICT platforms that exploit the potential of the smart city vision and therefore offer new services for the citizen. The ENEA Smart City Platform exploits the computational resources of the ENEAGRID infrastructure [3], as it is deployed in the cloud hosted at the Portici Research Center site. The creation of a customized ENEA cloud-based platform environment is possible thanks to the virtualization technologies of the VMware platform, which allow hosting the management, transportation, and processing of project data services, ensuring their availability and protection over time. In more detail, the SCP is composed of six Virtual Machines (VMs), each of which hosts a component with a specific role.
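Since the abstract specifies JSON-based data exchange between web services as the interoperability mechanism, the following is a small, hypothetical sketch of what such an exchange layer might look like: readings from two city domains are wrapped in a common, domain-tagged JSON envelope before being passed between platform components. The field names and payloads are assumptions for illustration, not the actual ENEA-SCP schema.

```python
import json

def to_scp_envelope(domain, payload):
    """Wrap a domain-specific reading in a common, domain-tagged envelope."""
    return {
        "domain": domain,
        "format": "json",
        "data": payload,
    }

# Illustrative readings from two different city domains.
energy_reading = {"meter_id": "E-17", "kwh": 3.2}
transport_reading = {"stop_id": "T-04", "waiting": 12}

messages = [
    to_scp_envelope("energy", energy_reading),
    to_scp_envelope("transport", transport_reading),
]

# Serialise for exchange between platform components (e.g. between VMs),
# then decode on the receiving side.
wire = json.dumps(messages)
decoded = json.loads(wire)
print([m["domain"] for m in decoded])  # -> ['energy', 'transport']
```

The common envelope is what makes cross-domain consumers possible: a dashboard or analytics service can route on the `domain` tag without knowing each domain's internal format in advance.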


Name: Mohammad Zare

Designation: University of Luxembourg, Luxembourg

Biography: Dr. Mohammad Zare is working as a senior scientist, with a focus on flood simulation/prediction models applying machine learning (ML) and artificial intelligence (AI) techniques, spatial data management, and developing remote sensing models at RSS-Hydro's research and education department (RED), Luxembourg. He received his PhD in July 2017 from the Faculty of Civil and Environmental Engineering, University of Kassel, Germany. Dr. Zare has worked at several universities and research institutes in Germany and Luxembourg. Over the years, his research has been funded through projects by different funding frameworks, including the German Academic Exchange Service (DAAD), the EU-funded ICT-AGRI programme, and the Luxembourg National Research Fund (FNR).


Abstract: Changing hydrological conditions are occurring all over the world, owing mostly to phenomena of climate change that affect atmospheric and earth surface processes. Temporal and spatial changes in rainfall have caused fundamental variations in the water cycle, including extremes such as flood events. Losses due to all types of floods are not only of an economic nature (several billions of EUR every year at the global level); there is also considerable loss of life. Responding appropriately to these threatening situations necessitates the use of innovative flood management techniques and technologies more than ever. The main priority of any flood management solution is to find suitable methods and models to manage floods better, to prepare for this natural hazard and risk phenomenon, and to minimize losses. Decision making and planning for the prediction of flood events and their generating processes require the use of adequate models and methods. In recent years, appropriate models and algorithms such as machine learning (ML) and deep learning (DL) have been developed and used in many research projects dealing with flood mapping. In this regard, the main purpose of this study is to present a review of applying ML/DL methods to process remotely sensed images for generating flood maps. Moreover, the basic concepts of some ML data-driven methods are presented. Three case studies with different ML algorithms have been selected and are illustrated to provide a better understanding of their application in flood studies. The findings of these case studies show that ML models are mostly applied to identify and predict flooded versus non-flooded pixels in images, but important challenges remain. These challenges need to be solved if ML is to be valuable for decision making processes related to flood management.
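The flooded/non-flooded pixel classification the abstract describes can be illustrated with a deliberately tiny sketch. Real studies apply ML/DL to SAR or optical imagery; here a nearest-centroid rule on synthetic radar backscatter values stands in for the learned model (all numbers below are illustrative assumptions, exploiting the fact that open water is a specular reflector and so tends to show low backscatter).

```python
# Toy flooded vs. dry pixel classifier: nearest class centroid on one
# synthetic feature (backscatter in dB). Values are illustrative only.

train = [(-18.0, "flooded"), (-16.5, "flooded"), (-7.0, "dry"), (-5.5, "dry")]

def centroid(label):
    """Mean backscatter of the training pixels with this label."""
    vals = [x for x, y in train if y == label]
    return sum(vals) / len(vals)

c_flood, c_dry = centroid("flooded"), centroid("dry")

def classify(db):
    """Assign a pixel to the nearer class centroid."""
    return "flooded" if abs(db - c_flood) < abs(db - c_dry) else "dry"

print([classify(v) for v in (-17.0, -6.0)])  # -> ['flooded', 'dry']
```

A DL flood-mapping model replaces the single hand-picked feature and centroid rule with many learned features per pixel, but the output has the same form: a per-pixel flooded/non-flooded label that is then assembled into a flood map.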


Designation: CoRover, India

Biography: Delivering business-viable tech innovations to solve important problems is Ankush's forte! Ankush is on the list of the top 10 AI influencers of the world and the top 50 tech entrepreneurs! He is on a mission to enable users to talk to any system the way they talk to an intelligent person, in the language, channel, and format of the user's choice, via AI Virtual Assistants (VideoBot, VoiceBot, ChatBot) and Conversational Commerce (Video Commerce & Voice Commerce), which facilitates end-to-end transactions including payment. With two decades of experience in building SaaS products, Ankush has built global winning teams creating scalable and secure enterprise platforms. As a technology leader, Ankush has led many startups and large MNCs across Asia, North America, and Europe, and generated revenue of multiple millions of dollars. The peak team size Ankush has managed/led is 1200, and the peak software subscription/license fee from one customer was $5M PA. Honoring his technical, business, and leadership acumen, many national and international awards have been conferred on Ankush, who holds an MS (SW Engg.) from BITS, Pilani; an MBA from ICFAI; an Executive Management Program from IIM-C; and Entrepreneurship 101 from MIT. He also holds many international certifications (PMI PMP, ACP, ITIL, Six Sigma, CSM, CSP, etc.). Ankush is the brainchild behind CoRover, a VC-backed company and the world's first and highest-ROI-delivering human-centric Conversational AI Platform, powered by a proprietary cognitive AI framework offering managed Chatbot as a Service (with a self-service option) to help enterprises generate revenue, save cost, improve customer experience, and achieve greater operational efficiency. Currently, CoRover's innovative and patent-pending platform is being used by 1B+ users.
Besides steering business operations, Ankush also oversees the key pillars behind CoRover's success, namely Product Management, R&D, Engineering, Customer Success, Operational Excellence, Marketing, Sales, GTM, Alliances, Partnerships, and Fund Raising. Ankush spearheads the definition of the company's strategy and vision, and believes in collaboration, teamwork, and leadership through involvement as the ethos defining success. Ankush has been a keynote speaker and panelist at many national and international conferences and on TV channels (CNBC, BBC, NDTV, Republic, Times Now, etc.), and a guest faculty member at top colleges and universities (IIM, BITS, etc.). He has mentored/trained 10K+ people (on Scrum, Agile, Project Management, Entrepreneurship, Startups, and Artificial Intelligence). He has been featured in Forbes, Entrepreneur, Fortune, ET, FE, TOI, etc. TED speaker too!


Abstract: Data-driven innovation forms a key pillar of 21st-century sources of growth. The confluence of several trends, including the increasing migration of socio-economic activities to the Internet and the decline in the cost of data collection, storage, and processing, is leading to the generation and use of vast volumes of data, commonly referred to as "big data". These large data sets are becoming a core asset in the economy, fostering new industries, processes, and products and creating significant competitive advantages. In business, for instance, data exploitation promises to create value in a variety of operations: optimization of value chains in global manufacturing and services, more efficient use of labor, and tailored customer relationships. Over the last few decades, companies have increasingly looked to do business across borders, helping to drive economic growth around the world (see Japan, South Korea, Taiwan, Mainland China, and India). Companies doing business across borders have largely done so without the benefit of data, because our data-providing institutions have been focused on the domestic market. This has meant higher risk, but typically the rewards of expanding into big new markets or finding massive cost savings through low-wage labor have more than compensated the risk-takers. Now, economic development in these emerging markets is leading to an increase in data, as increasingly sophisticated governments work to make more data available in order to further grease the wheels of commerce. This seems to suggest that economic development comes first, then data, then more economic development.

But what if data could come first? What if we could see more data coming out of emerging markets even before governments have the capacity to collect, and make available, large amounts of data? Might the transparency provided by data give more companies the confidence they need to do business in these markets, and in fact jump-start the process of economic development? This brings us back to big data. Humanity is producing so much data these days not because we have all decided to make the production of data our top priority, or because the government has dramatically ramped up the amount of data it is collecting and making available. Rather, technology has created a world where people are generating massive amounts of data simply by living their lives, and companies are generating massive amounts of data simply by going about their business. Clearly, this is already the case in advanced economies. As the costs of key technologies continue to plummet, this will increasingly be the case in emerging economies as well. And when that happens, we will see more companies digging into the data and emerging with the confidence to seize new opportunities in these markets. The result? Economic development in places where you would least expect it.

Name: Bilal Alhayani

Designation: Yildiz Technical University, Turkey

Biography: Bilal Al-Hayani received the B.Sc. degree in Laser Engineering from the University of Technology, Baghdad, Iraq (1999–2004), and the M.Sc. degree in electronics and telecommunication engineering and electromagnetics from the University of Pune, India (2011–2013). He joined the Electronics and Communication Department of Yildiz Technical University, Istanbul, Turkey, as a Ph.D. researcher in 2014. His general research interests lie in signal processing, communication theory, and signal coding for wireless communication and image processing; his specific research areas include cooperative communication techniques. He has published many original research articles in SCIE- and Scopus-indexed journals with SAGE and ASP publishers.

Title of the talk: Establishment Of Body Auto Fitting Model BAFM Using NJ-GPM At Toyota

Abstract: Wireless sensor network technologies are considered one of the key research areas in computer science and the healthcare application industries. The Internet of Medical Things (IoMT) is an amalgamation of wearable medical devices and mobile applications that can connect to healthcare information technology systems using networking technologies. The impact of the IoT in healthcare, although still in its initial stages of development, has been significant. Attempts to review and understand the applications of the IoMT in personalised healthcare will promote personalized care and higher standards of care and living, through individual data-driven treatment regimens as well as optimized devices tailored to individual physiological requirements. The IoMT senses the patients' health status and then transfers the medical data to doctors and healthcare providers with the help of remote cloud data centres. These data are used for disease diagnosis and in medical care decision-making. The main challenge in the IoMT is how to manage the large amount of medical data. Potential topics include, but are not limited to:

  • IoT measurement and control, and new methodologies of optical sensor networks for detection
  • High-level methods and tools for node and application design of healthcare sensors
  • The physical environment, both local and remote sensors
  • Biomedical sensor networks and identification of biological and chemical agents
  • Image transmission in healthcare sensors


Name: Luca Giraldi

Designation: EMOJ, Italy

Biography: Luca Giraldi is CEO of EMOJ, which offers advanced technologies based on Artificial Intelligence techniques to revolutionize the world of Customer Experience. Its motto is "we are an artificial intelligence company, but we put human first, before artificial". EMOJ operates in the fields of automotive, retail, and culture, and is considered by Unicredit Startlab and Bocconi to be among the 10 most successful Italian startups. Luca received a PhD in Industrial Engineering in 2019 and is an expert in digital transformation, customer experience, and empathic marketing.


Abstract: Nowadays, driver monitoring is a topic of paramount importance, because distraction and inattention are a relevant safety concern in crashes. Currently, Driver Monitoring Systems collect and process dynamic data from sensors embedded in the vehicle and from RGB-D cameras, detecting visual distraction and drowsiness, but they neglect the driver's emotional state, despite research demonstrating that emotion and attention are linked and that both have an impact on performance. For instance, negative emotions can alter perception and decision-making.

Consequently, explaining complicated phenomena such as the effects of emotions on driving, and conceiving how to use emotions to decrease distraction, need to be explored. Today several methods and technologies allow the recognition of human emotions, and they differ in their level of intrusiveness. Invasive instruments based on biofeedback sensors can affect the subjects' behaviour and the experienced emotions. Non-intrusive emotion recognition systems, i.e., those based on speech recognition analysis and facial emotion analysis, implement Convolutional Neural Networks (CNNs) for signal processing. However, no study has actually tested their effectiveness in a motor vehicle to enable emotion regulation. In this context, the research focuses on the introduction of a multimedia sensing network and an affective intelligent interface able to monitor the emotional state and degree of attention of the driver by analysing the person's facial expressions, and to map the recognized emotions to the car interface in a responsive way to increase human wellbeing and safety. The adopted technology for emotion recognition has been reported previously and is extended with additional features specific to the vehicle environment, to implement emotion regulation strategies in the human-machine interface and improve driving safety. The result is an emotion-aware interface able to detect and monitor human emotions and to react in case of hazard, e.g. providing warnings and proper stimuli for emotion regulation and even acting on the dynamics of the vehicle.
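The reaction step of such an emotion-aware interface can be pictured as a mapping from the recognized emotional state (plus an attention estimate) to an interface response. The sketch below is purely illustrative: the emotions, thresholds, and actions are assumptions for demonstration, not the actual rule set of the system in the abstract.

```python
# Toy emotion-regulation step: recognized emotion + attention level -> HMI action.
# All labels, thresholds, and actions are illustrative assumptions.

RESPONSES = {
    "anger": "play calming audio and soften cabin lighting",
    "sadness": "suggest a break and energising music",
    "fear": "reassure driver and increase assistance level",
    "neutral": "no action",
}

def react(emotion, attention):
    """Pick a regulation strategy; low attention always triggers a warning."""
    if attention < 0.4:               # attention estimate in [0, 1]
        return "issue distraction warning"
    return RESPONSES.get(emotion, "no action")

print(react("anger", 0.9))    # -> play calming audio and soften cabin lighting
print(react("neutral", 0.2))  # -> issue distraction warning
```

In the real system the `emotion` input would come from a CNN over facial frames and the responses would include acting on vehicle dynamics; the point of the sketch is only the shape of the mapping from recognized state to responsive interface.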

Name: Jemili Farah

Designation: University of Sousse, TUNISIA

Biography: Farah Jemili received the Engineering degree in Computer Science in 2002 and the Ph.D. degree in 2010. She is currently an Assistant Professor at the Higher Institute of Computer Science and Telecom of Hammam Sousse (ISITCOM), University of Sousse, Tunisia. She is a senior researcher at the MARS Laboratory (ISITCOM, Tunisia). Her research interests include Artificial Intelligence, Cyber Security, Big Data Analysis, Cloud Computing, and Distributed Systems. She has served as a reviewer for many international conferences and journals. She has many publications: 6 book chapters, 6 journal publications, and more than 20 conference papers.

Title of the talk: Deep Learning for Intrusion Detection

Abstract: In recent years, the world has seen a significant evolution in different areas of connected technologies such as smart grids, the Internet of Vehicles, long-term evolution, and 5G communication. By 2023, it is expected that the number of IP-connected devices will be three times larger than the global population, and the total number of DDoS attacks will double from 7.9 million in 2018 to 15.4 million by 2023, as reported by Cisco. As of 2020, the amount of data generated each day exceeds petabytes, and this includes the traces that internet users leave when they access a website, a mobile application, or a network.

This growth has given more space to hackers to launch their malicious attacks and to use advanced techniques and tools for intrusion. Intrusion detection systems (IDSs) are among the most important systems used in cyber security. Intrusion detection systems are the hardware or software that monitors and analyzes data flowing through computers and networks to detect security breaches that threaten the confidentiality, integrity, and availability of a system's resources. According to IBM, the cost of a data breach increased from $3.86 million to $4.24 million in 2021. This is a loss that no business would be able to sustain. That is why, as Forbes reported, 83% of enterprise workloads were expected to move to the cloud by 2020, making it necessary to develop new and efficient IDSs. Deep learning for intrusion detection is one of the hot topics in recent academic research. With the improvement of computing power and the rapid growth of the volume of data, the development of deep learning has attracted attention again, so that the practicality and popularity of deep learning have greatly improved. Deep learning is an advanced branch of machine learning that uses multilayer networks. The layers are connected by neurons, which represent the mathematical computation of the learning processes. Intrusion detection has been widely studied in both industry and academia, but cybersecurity analysts always want more accuracy and more global threat analysis to secure their systems in cyberspace. Big data represents the great challenge of intrusion detection systems, making it hard to monitor and analyze this large volume of data using traditional techniques. Recently, deep learning has emerged as a new approach that enables the use of big data with a low training time and a high accuracy rate. This contribution proposes an approach to an IDS based on cloud computing and the integration of big data and deep learning techniques to detect different attacks as early as possible.
To demonstrate its efficacy, the proposed system is implemented on the Microsoft Azure cloud, as it provides both processing power and storage capabilities, using a convolutional neural network (CNN-IDS) with the distributed computing environment Apache Spark, integrated with the Keras deep learning library. The proposed approach includes the pre-processing of data and deep learning. The experimental results show the effectiveness of the approach in terms of accuracy and detection rate, thanks to the integration of the deep learning technique and the Apache Spark engine.
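The convolutional step at the heart of a CNN-based IDS can be shown in miniature: a learned kernel slides over a pre-processed, normalised network-flow feature vector, producing a feature map that later layers classify as benign or attack. The sketch below implements that 1-D convolution from scratch; the flow features and kernel weights are illustrative assumptions, not those of the system in the abstract.

```python
# Minimal 1-D convolution + ReLU, the core operation of a CNN over
# tabular network-flow features. Weights and features are illustrative.

def conv1d(features, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in deep learning)."""
    k = len(kernel)
    return [
        sum(features[i + j] * kernel[j] for j in range(k))
        for i in range(len(features) - k + 1)
    ]

def relu(xs):
    """Elementwise rectified linear activation."""
    return [max(0.0, x) for x in xs]

# Toy normalised flow features (duration, bytes, packets, flags, ...).
flow = [0.1, 0.9, 0.8, 0.2, 0.0, 0.7]
kernel = [0.5, -0.25, 0.5]           # one "learned" filter
feature_map = relu(conv1d(flow, kernel))
print(len(feature_map))  # -> 4  (valid convolution: 6 - 3 + 1)
```

A real CNN-IDS stacks many such filters, pools the feature maps, and feeds them to dense layers; on the system described here that computation would be expressed in Keras and distributed over Apache Spark, but each filter performs exactly this sliding dot product.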

Name: Ghazal Azarfar

Designation: University of Saskatchewan, Canada

Biography: Data Scientist · Machine Learning Engineer · Applied Scientist · Associate Researcher · Computer Vision Engineer

Title of the talk: Deep Learning to Estimate Age from Chest CT Scans

Abstract: Purpose: The objective of our study was to estimate a patient's age from a chest CT scan and assess whether the CT-estimated age is a better predictor of lung cancer risk than chronological age.
Methods: Composite images were created to develop an age prediction model based on Inception-ResNet-v2. We used 13824 chest CT scans from the National Lung Screening Trial (NLST) for training (91%), validation (5%), and testing (4%). We independently tested the model using 1849 CT scans collected in Saskatoon, Canada. We then assessed the CT-estimated age as a risk factor for lung cancer screening using the NLST dataset. We calculated the relative lung cancer risk between two groups; group 1: those who were assigned a CT age older than their chronological age, and group 2: those who were assigned a CT age younger than their chronological age.
Results: Comparing the chronological age with the estimated CT age resulted in a mean absolute error of 1.91 years and a Pearson's correlation coefficient of 0.9. The area associated with the lungs seemed to be the most activated region in the age estimation model. A relative lung cancer risk of 1.8 (95% confidence level) was calculated between the two groups, indicating a positive association between having an older chest CT age (than chronological age) and having lung cancer.
Conclusion: Our results show that CT-estimated age may be a better predictor of lung cancer than chronological age.


Name: Adam Alonzi

Designation: Interdisciplinary Analyst at EthicsNet, USA

Biography: Adam Alonzi is a futurist, writer, biotechnologist, programmer, and documentary maker. He is an interdisciplinary analyst for EthicsNet, a nonprofit building a community with the purpose of co-creating a dataset for machine ethics algorithms. He also serves as the Head of New Media at BioViva Science, as an analyst for the Millennium Project, and as a consultant for a number of technology startups.


Abstract: Outcomes on the macro level often cannot be accurately extrapolated from the micro-behaviors of individual agents. The interdependence of a complex system's components makes simulation a viable option for exploring cause and effect relationships within it (Miller and Page, 2009). Chaos theory emphasizes the sensitivity of such networks to starting conditions (Boccaletti, 2000), which strongly suggests that thought should be put into the architecture of an AGI "society" before it begins to take shape. Protocols for emergency interventions should certainly be in place, but the network itself should be robust enough from the beginning to handle sudden deviations from basic ethical precepts by one or more of its members. Outside of its context, and without any information about the parts to which it is connected, a cell or leaf or animal can be studied, but not understood in a meaningful way (Mitchell, 2009). Creating moral agents in a hyperconnected world will involve modeling their interactions with entities like and unlike themselves in the face of both predictable and unforeseen events. This will be helpful, as groups can behave differently than their individual parts (Schelling, 1969). Keeping AI friendly does not end with giving each AI a set of maxims before letting them loose, but with satisfactorily explicating the emergent phenomena that arise from the interactions of similarly or differently "educated" machines. Because of the near certainty that synthetic intelligences will communicate rapidly and regularly, it is imperative that thought leaders in AI safety begin thinking about how groups of artificially intelligent agents will behave.


Name: Qamar Wali

Designation: National University of Technology, Pakistan

Biography: Dr. Qamar Wali has been working as an Assistant Professor of Physics at the National University of Technology since September 2018. He earned his PhD in Advanced Materials from Universiti Malaysia Pahang in 2016 and is currently involved in renewable energy technology research, particularly perovskite solar cells. He has published more than 30 research articles in world-renowned journals, with more than 1300 citations. He holds a Malaysian patent on a 'multichannel nanotubular metal oxide' and is a recipient of the Research Fund for International Young Scientists (RFIS-I) award, funded by the National Natural Science Foundation of China. In his teaching, he has successfully implemented the OBE system for Applied Physics in different engineering programs. He has been involved in the project "Third Generation Photovoltaics for Building Integration: A Smart and Sustainable Energy Solution" with the US-PAK Center for Energy at the University of Engineering and Technology, Peshawar, Pakistan.

Title of the talk: Semi-transparent solar panels for smart buildings

Abstract: Considering the ongoing energy crises in developing countries such as Pakistan, fast, reliable, simple, less time-consuming, and cost-effective methods of producing electricity are required to replace the complicated procedures of generating electricity using conventional methods. Global demand for energy is increasing rapidly because of population and economic growth, especially in emerging market economies. Cities consume more than two-thirds of the world's energy resources and are responsible for around the same share of CO2 emissions. Buildings alone are responsible for 36% of global energy consumption and nearly 40% of total direct and indirect CO2 emissions. At the current pace, global energy use in buildings could double or even triple by 2050, as the world's population living in cities is projected to increase further in the coming decades. As per new policy, all new buildings that will be occupied by public authorities should be rated nearly zero energy. This means that new buildings must generate their own energy from renewable sources and not be wholly reliant on traditional grid-based forms of fossil-fuel-related energy. Integrating photovoltaics in buildings represents a feasible solution towards energy-efficient buildings, and in order to achieve sustainability goals in cities, harvesting the full potential of the building (facades, windows) for renewable energy generation is required. The potential of building integrated photovoltaics (BIPV) to integrate into the building envelope holds aesthetic appeal for architects, builders, and property owners, and it is a market sector that is expected to grow dramatically over the next 5–10 years.
Among all existing photovoltaic technologies, third-generation solar cells have attracted substantial interest in BIPV due to their reduced cost. They possess a number of key advantages, including low weight and aesthetic value for architects, and can be printed in any pattern, enabling relatively low-cost BIPV without significant compromise in efficiency. Keywords: solar energy materials; semi-transparent devices; nanomaterials; light weight; flexibility.