NEWS · Edge / September 2024

The Technology Innovation Institute (TII), the UAE’s global scientific research centre and the applied research pillar of Abu Dhabi’s Advanced Technology Research Council (ATRC), has released a new large language model (LLM) in its Falcon series: the Falcon Mamba 7B. Independently verified by Hugging Face, the new model is the top-performing open-source State Space Language Model (SSLM) globally.

H.E. Faisal Al Bannai, Secretary General of ATRC and Advisor to the UAE President for Strategic Research and Advanced Technology Affairs, said: “The Falcon Mamba 7B marks TII’s fourth consecutive top-ranked AI model, reinforcing Abu Dhabi as a global hub for AI research and development. This achievement highlights the UAE’s unwavering commitment to innovation.”

This is Falcon’s first SSLM, a departure from previous Falcon models, which used a transformer-based architecture. Among transformer-architecture models, Falcon Mamba 7B outperforms Meta’s Llama 3.1 8B, Llama 3 8B, and Mistral’s 7B on the newly introduced benchmarks from Hugging Face. Among SSLMs, Falcon Mamba 7B beats all other open-source models on the old benchmarks, and it will be the first model on Hugging Face’s new, tougher benchmark leaderboard.

Dr. Najwa Aaraj, CEO of TII, said, “TII continues to push the boundaries of technology with its Falcon series of AI models. The Falcon Mamba 7B represents true pioneering work and paves the way for future AI innovations that will enhance human capabilities and improve lives.”

State Space models are extremely performant at understanding complex situations that evolve over time, such as a whole book, because SSLMs do not require additional memory to digest such large amounts of information.
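The constant-memory property described above can be illustrated with a minimal sketch. This is a toy linear recurrence, not TII’s actual Falcon Mamba architecture: a state-space model carries one fixed-size state across an arbitrarily long input, while full self-attention performs a number of token-to-token comparisons that grows with the square of the sequence length.

```python
# Toy illustration (not TII's Falcon Mamba): a linear state-space recurrence
# keeps one fixed-size hidden state no matter how long the input is, whereas
# full self-attention compares every token with every other token (n * n pairs).

def ssm_scan(xs, a=0.5, b=1.0):
    """h_t = a * h_{t-1} + b * x_t -- constant memory, one pass over the input."""
    h = 0.0
    for x in xs:
        h = a * h + b * x
    return h

def attention_pairs(n):
    """Number of token-to-token comparisons full self-attention performs."""
    return n * n

if __name__ == "__main__":
    print(ssm_scan([1.0, 1.0, 1.0]))   # state after three tokens -> 1.75
    print(attention_pairs(8_000))      # comparisons for an 8k-token context -> 64,000,000
```

However short the sketch, it shows the trade-off the article describes: the recurrence never stores the sequence, while attention’s pairwise comparison is what makes long contexts computationally expensive.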
UAE’S TII UNVEILS FALCON MAMBA 7B: THE WORLD’S TOP SSLM MODEL
The Falcon Mamba 7B AI model has been independently verified as the world’s best in its class, further establishing Abu Dhabi as a hub for research and innovation.

Transformer-based models, on the other hand, are very efficient at remembering and using information they have processed earlier in a sequence. This makes them very good at tasks like content generation. However, because they compare every word with every other word, they require significant computational power.

SSLMs can be applied in various fields, such as estimation, forecasting, and control tasks. Like transformer-architecture models, they excel in Natural Language Processing (NLP) tasks and can be applied to machine translation, text summarisation, computer vision, and audio processing.

Dr. Hakim Hacid, Acting Chief Researcher of TII’s AI Cross-Center Unit, said: “As we introduce the Falcon Mamba 7B, I’m proud of the collaborative ecosystem of TII that nurtured its development. This release represents a significant stride forward, inspiring fresh perspectives and further fueling the quest for intelligent systems. At TII, we’re pushing the boundaries of both SSLM and transformer models to spark further innovation in generative AI.”

DAMAC Group’s investment arm, DAMAC Capital, has announced a strong ROI on its strategic investments in SpaceX and Stripe. The announcement comes after the Group announced its investment in the AI sector. Some key investments by DAMAC Capital include the AI startup Anthropic; xAI, founded by Elon Musk; and the French AI company Mistral, recognised for its European large language model (LLM) open-source capabilities.
DAMAC CAPITAL ANNOUNCES STRONG ROI ON ITS SPACEX AND STRIPE INVESTMENTS

In a note shared by the company, DAMAC Group Founder Hussain Sajwani stated that the company’s investments in SpaceX and Stripe signify a strategic alignment with companies at the edge of technological advancement and innovation. Sajwani added, “These investments are a testament to DAMAC Capital’s commitment to diversifying our portfolio and supporting enterprises that have the potential to create transformative impacts on a global scale.”

Since the COVID-19 pandemic, DAMAC has actively investigated the space tech industry. The company stated that Elon Musk’s SpaceX is at the forefront of space exploration and technology, revolutionising space travel and connectivity. Meanwhile, the Group welcomed a positive ROI on Stripe within a year of its investment. Stripe, a leading online payment processing platform, enables businesses worldwide to conduct seamless online transactions.

The company stated that DAMAC Capital’s investment in SpaceX aligns with its vision to support groundbreaking ventures that redefine industries. With projects like Starship and Starlink, SpaceX aims to make space more accessible and improve global internet coverage, directly contributing to technological advancements and economic development.

Since 2015, over $47 billion of private capital has been invested in the global space sector, growing 21 per cent annually on average. As space-enabled technologies advance, the space economy is expected to reach $1.8 trillion by 2035. A new report from the World Economic Forum, “Space: The $1.8 Trillion Opportunity for Global Economic Growth,” developed in collaboration with McKinsey & Company, outlines key developments that will shape space and adjacent industries throughout the next decade.
DAMAC Capital’s investment in Stripe reflects its strategic focus on fintech innovations that transform financial services. By investing in Stripe, DAMAC Capital supports the acceleration of digital commerce and financial inclusion, fostering a more connected and efficient global economy.

ELON MUSK ACCUSES OPENAI FOUNDERS OF FRAUD AND BREACH OF TRUST
Tesla’s chief has launched a new lawsuit against OpenAI and its leaders, Sam Altman and Greg Brockman, accusing them of deceiving him into co-founding the non-profit.

Elon Musk has accused OpenAI Inc. and its leaders, Samuel (Sam) Altman and Greg Brockman, of deceiving him into co-founding the non-profit venture and subsequently stripping it of its valuable technology and assets. Musk’s complaint, filed on Monday in the US District Court for the Northern District of California, is part of an ongoing dispute with Altman over OpenAI’s original mission. He claims that OpenAI, Altman, and Brockman “intentionally courted and deceived” him to secure funding, exploiting his “humanitarian concern” about the existential dangers of artificial intelligence (AI).

This is the second lawsuit Musk has filed against Altman and OpenAI. In June, he voluntarily withdrew a similar suit in California state court, which accused Altman of disregarding OpenAI’s charter to ensure that artificial intelligence benefits “all of humanity” while profiting from multibillion-dollar investments from Microsoft Corp. Musk has also accused OpenAI of becoming a “de facto subsidiary” of Microsoft.

Musk withdrew his state complaint in June, just a day before a California judge was set to hear OpenAI’s request for dismissal. At that time, Musk had raised $6 billion for his AI company, positioning it as an alternative to OpenAI. OpenAI argued that Musk intended to use pretrial fact-finding and information sharing, known as discovery, in his state case to gain access to its “proprietary records and technology.”

After the first lawsuit, OpenAI contended that Musk had previously supported its plans to become a for-profit business and had insisted it raise “billions.” ChatGPT’s parent company has denied any founding agreement that it breached or any promise to make its technology open-source, as Musk has claimed.

In the federal complaint, Musk alleges that after he invested tens of millions of dollars in OpenAI, Altman “flipped the narrative” about humanitarian concern and “proceeded to cash in” by partnering with Microsoft. This partnership enabled Altman to establish several for-profit affiliates of OpenAI, which took control of its board and “systematically drained the non-profit of its valuable technology and personnel,” the lawsuit states. The complaint notes that the OpenAI network was recently valued at $100 billion.

Altman and Brockman are accused of fraud for promising that OpenAI would be dedicated to humanitarian AI uses and then converting the company into a for-profit entity. The defendants allegedly committed constructive fraud by persuading Musk to make charitable contributions to OpenAI.

SAUDI ROAD CODE SETS THE STANDARD FOR AUTONOMOUS VEHICLE INTEGRATION
The Kingdom’s latest Road Code establishes Saudi Arabia as a leader in modern infrastructure development, prioritising road quality, safety, and the integration of autonomous vehicles.

Saudi Arabia’s Roads General Authority has announced that the Saudi Road Code represents a significant milestone in the Kingdom’s efforts to enhance its road network. By standardising essential protocols and policies across the road sector, the code is designed to future-proof infrastructure, particularly with regard to the requirements of self-driving vehicles. This initiative positions Saudi Arabia at the forefront of nations adopting modern technologies to advance their infrastructure.

The official press release underscores that the code optimises road quality, safety, and security while boosting economic efficiency and sustainability. A key focus of the code is the accommodation of self-driving vehicles. To this end, the code mandates the installation of smart communication devices along roadways, which will interact directly with autonomous vehicles via advanced networks. These devices provide real-time data on road conditions, enabling safer driving decisions and improving traffic flow.

The Saudi Road Code serves as a comprehensive technical guide for all road authorities throughout Saudi Arabia. It equips these entities with the information they need to effectively plan, design, construct, operate, and maintain all types of roads. The code also incorporates environmental considerations and the specific needs of self-driving vehicles. By providing guidelines, drawings, procedures, and checklists, it aims to ensure the highest standards of road quality, safety, security, economic efficiency, and sustainability across the Kingdom.

Currently in its testing phase, the code will become binding on all government agencies in early 2025, following the conclusion of the testing period at the end of 2024.
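The roadside-device interaction described in the story above is specified only at a high level. As a purely hypothetical illustration (the message fields, names, and thresholds below are invented for this sketch, not taken from the Saudi Road Code), a vehicle-to-infrastructure exchange might look like:

```python
# Purely illustrative vehicle-to-infrastructure (V2I) sketch. The message
# fields and advisory values are invented; the Saudi Road Code does not
# publish a concrete message format.
from dataclasses import dataclass

@dataclass
class RoadConditionMessage:
    segment_id: str       # hypothetical road-segment identifier
    surface: str          # e.g. "dry", "wet", "sand"
    advisory_kph: int     # advisory speed for the segment

def plan_speed(msg: RoadConditionMessage, cruise_kph: int) -> int:
    """An autonomous vehicle caps its cruise speed at the roadside advisory."""
    return min(cruise_kph, msg.advisory_kph)

if __name__ == "__main__":
    msg = RoadConditionMessage("SR-101-km42", "wet", advisory_kph=80)
    print(plan_speed(msg, cruise_kph=120))  # vehicle slows to 80
```

The design point is the one the article makes: the roadside device, not the vehicle alone, supplies real-time road-condition data, and the vehicle’s planner treats that data as a constraint on its own decisions.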
OOREDOO INVESTS QR 2.8M IN QATAR’S QUANTUM FUTURE WITH HBKU COLLABORATION
With a QR 2.8 million investment, Ooredoo supports HBKU’s efforts to build Qatar’s first quantum communication link, advancing the nation’s secure communication capabilities.

Ooredoo, the Qatar-based telecommunications operator, has allocated close to QR 2.8 million to Hamad Bin Khalifa University’s (HBKU) Qatar Centre for Quantum Computing (QC2). The funding supports QC2’s project to establish the country’s first quantum communication testbed, a significant advancement for quantum communication technology in the region.

The quantum communication link will leverage quantum cryptography and offer 100 per cent security that surpasses the capabilities of traditional communication methods. Unlike classical communication links, which depend on cryptographic algorithms that can be compromised over time, quantum cryptography offers an unmatched level of security, making it an ideal solution for protecting communication infrastructure. The testbed will be critical in developing Qatar’s future quantum networks, providing the foundation for the next-generation Quantum Internet.

Dr. Saif Al-Kuwari, Director of QC2, underscored the importance of Ooredoo’s involvement, stating that it lays the groundwork for the future of secure communication in Qatar and cements the nation’s leadership in the development and implementation of this transformative technology.

FEATURE · Edge / September 2024

ADAPTING TO CHANGE: Business Impact of the EU AI Act
Words by Sindhu V Kashyap

Earlier this month marked the official coming into force of the EU AI Act, a pioneering effort to regulate artificial intelligence and ensure its safe, transparent, and ethical deployment across various sectors. This legislation is designed to create a comprehensive regulatory framework for AI systems within the EU, focusing on ensuring safety, transparency, and trustworthiness. It aims to prevent AI systems from engaging in discriminatory practices and promotes the development and use of environmentally sustainable AI technologies. The act applies to all providers and users of AI systems operating within the EU market, including those outside the market if their systems impact users within it.

The AI Act’s implementation is phased to allow adequate preparation and compliance. Six months after the act enters into force, the ban on AI systems posing unacceptable risks will be enacted. Codes of practice will apply nine months after the act enters into force, and rules for general-purpose AI systems, including transparency requirements, will apply twelve months after the act enters into force.
High-risk systems will have more time to comply, with obligations becoming applicable thirty-six months after the act enters into force.

“The EU AI Act marks a significant milestone in the regulation of artificial intelligence and will inevitably shape how companies developing and implementing AI will approach the technology,” said Matt Cloke, CTO at Endava. “Its comprehensive framework, which uses a risk-based category system, emphasises the need for companies to prioritise safety, transparency, and ethical considerations in their AI projects.”

One of the core principles of the AI Act is its risk-based approach, classifying AI systems into different risk categories with corresponding regulatory requirements. AI systems that pose an unacceptable risk to people’s safety and rights are banned outright. This includes AI systems that manipulate human behaviour through subliminal techniques, exploit vulnerabilities of specific groups, engage in social scoring, or employ real-time remote biometric identification in public spaces. These practices are considered too dangerous and harmful, and their use is strictly prohibited, with limited exceptions for law enforcement in severe cases, subject to court approval.

Jacob Beswick, Director of AI Governance at Dataiku, remarked, “Given its extraterritorial application, many businesses will be preparing to comply with the new rules to continue operations within the EU. As one of the most comprehensive pieces of AI regulation to be passed to date, preparing for compliance is both a step into the unknown and an interesting bellwether as to what might come in terms of AI-specific regulatory obligations across the globe.”

High-risk AI systems significantly impact people’s safety or fundamental rights and are subject to stringent regulations.
They are divided into two categories: AI systems used as safety components in products regulated by EU safety legislation (such as medical devices, toys, aviation, and cars), and AI systems used in specific high-risk areas such as critical infrastructure management, education, employment, essential public and private services, law enforcement, migration, and legal interpretation. Providers of high-risk AI systems must conduct thorough assessments before these systems can be marketed, ensuring compliance with robust safety, transparency, and oversight requirements.

Cloke further noted, “This new regulation should encourage companies to take a closer look at the data they use, ensuring that it is of high quality. By imposing strict requirements, particularly on high-risk AI systems, the Act protects consumers’ rights and fosters a culture of responsibility and accountability across industries that have adopted the technology.”

AI systems that interact with humans or generate content, such as chatbots and deepfakes, are considered to pose limited risks. These systems must adhere to transparency obligations, such as informing users when they are interacting with an AI system or when content is AI-generated. GenAI models, like ChatGPT, must disclose AI-generated content, prevent the generation of illegal content, and publish summaries of the copyrighted data used for training. High-impact general-purpose AI models, like GPT-4, must undergo thorough evaluations, and any severe incidents must be reported to the European Commission.
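The four-tier structure described above can be sketched as a toy lookup. The tier names follow the Act’s categories, but the example systems and their mapping are illustrative only; real classification is a legal analysis, not a dictionary:

```python
# Toy illustration of the EU AI Act's four risk tiers. The mapping is
# illustrative; actual classification depends on legal analysis.
RISK_TIERS = {
    "social_scoring": "unacceptable",      # banned outright
    "cv_screening_for_hiring": "high",     # stringent pre-market obligations
    "customer_chatbot": "limited",         # transparency duties only
    "spam_filter": "minimal",              # no new obligations
}

def risk_tier(system: str) -> str:
    """Look up the (illustrative) tier for a system, defaulting to 'unknown'."""
    return RISK_TIERS.get(system, "unknown")

if __name__ == "__main__":
    print(risk_tier("spam_filter"))     # minimal
    print(risk_tier("social_scoring"))  # unacceptable
```

The point of the tiering, as the feature notes, is proportionality: obligations scale with the tier, from outright prohibition down to no new requirements at all.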
“As a step towards EU AI Act compliance readiness, businesses should extend their understanding of where they are deploying AI systems to the intended purpose of these systems, the technologies used (e.g., generative AI), and where these systems fall in terms of the risk tiering established in the EU AI Act,” advised Beswick. “Determining exposure to future compliance obligations will enable businesses to begin taking action to mitigate the risk of non-compliance and avoid disruptions to business operations, whether through fines or pulling operational systems from the market.”

AI systems posing minimal risk, such as spam filters, are not subject to specific regulatory requirements beyond existing legislation. This category covers most AI applications, ensuring unnecessary regulation does not stifle innovation.

One of the most notable aspects of the EU AI Act is its extraterritorial effect. “In other words, the act not only applies to AI systems developed within the EU but also to those offered to its customers or affecting its citizens, regardless of where the providers are located,” Cloke explained. “AI developers and providers outside of the EU must also adhere to these regulations to operate within the European market. The EU AI Act offers both a challenge and an opportunity for these companies. While the compliance requirements may initially seem daunting, they also present a chance to differentiate themselves by adopting best practices in AI governance.”

The AI Act balances regulation with support for innovation, particularly for start-ups and small and medium-sized enterprises (SMEs). It mandates that national authorities provide testing environments that simulate real-world conditions, allowing companies to develop and train AI models before public release.

Beswick emphasised the importance of preparedness, stating, “With the countdown now starting until the regulation fully applies, there are several steps businesses should be taking over the next 18 or so months to ensure they are prepared. First, businesses should take stock of their AI assets and review what AI systems are operationalised within Europe. Once businesses have a full overview of where their AI assets are and where they are operating, they should move on to qualifying these assets.”

“The emphasis on transparency and human oversight over the technology, which this act brings, aligns with growing public and consumer expectations around ethical use,” Cloke added. “As the EU sets a global benchmark for AI regulation, companies that adapt to these standards early on will be better positioned to gain trust and credibility in the market.”

The European Commission proposed the AI Act in April 2021 and reached a political agreement in December 2023. The European Parliament adopted it in March 2024 and the Council in May 2024. The act has since been published in the EU’s Official Journal and entered into force. This regulatory framework sets a precedent for global AI governance, ensuring that AI technologies can develop within a structured environment prioritising safety, ethics, and innovation. The AI Act seeks to mitigate potential harms by adopting a risk-based approach while fostering a vibrant and innovative AI ecosystem in the EU.

NEW CYBERSECURITY MEASURES IN UAE: Addresses Quantum Encryption, IoT Security, and More
Words by Sindhu V Kashyap

Dr. Mohammed Hamad Al-Kuwaiti, Chairman of the UAE Cybersecurity Council, announced the development of three new cybersecurity policies to be issued by the end of 2024. These policies aim to bolster the nation’s standing as a global hub for advanced technology and artificial intelligence (AI).

In an interview with the Emirates News Agency (WAM), Dr. Al-Kuwaiti outlined the upcoming policies: “cloud computing and data security,” “Internet of Things security,” and “cybersecurity operations centres.” Additionally, he mentioned that the executive regulations for the “encryption” law, which will establish key standards for data transmission security in line with quantum systems, are expected to be finalised before the end of the year.

Al-Kuwaiti emphasised the UAE’s potential to become a global data hub, attributing this to the country’s robust capabilities and resources. He highlighted the UAE’s commitment to enacting laws and policies that regulate this strategic sector and foster regional and international partnerships across public and private sectors. He noted that amid rapid advancements in technology and AI, the UAE serves as a model for other nations striving to enhance their cybersecurity frameworks, especially within the data sector.

Al-Kuwaiti pointed out that the UAE’s digital transformation spans multiple sectors, including health, energy, education, and aviation, intensifying the need for a sophisticated cybersecurity system to shield cyberspace from potential threats. He underscored the significance of protecting institutions from cyber threats that could result in data breaches, identity theft, intellectual property violations, and compromises of critical infrastructure. He highlighted the UAE’s resilience against malicious cyberattacks targeting strategic sectors, mainly financial services.
These attacks, he said, aim to undermine national security or extract financial information for illicit purposes. Al-Kuwaiti affirmed that the UAE’s cybersecurity infrastructure remains robust: it continuously repels and neutralises such threats, identifies perpetrators, and adheres to the highest international standards in dealing with cyber criminals.

With the growing use of AI, cybersecurity has become a must-have for all organisations. This includes implementing stringent quality assurance practices, regular audits, and continuous monitoring of software and vendors. It has become important that organisations adopt a proactive approach, ensuring that updates are tested in controlled environments before full deployment.

Fostering a culture of collaboration and information sharing within the cybersecurity community can significantly enhance collective defence mechanisms. Investing in training and development for IT and security professionals ensures they are equipped with the latest knowledge and skills to handle such challenges effectively.

Organisations can better prepare for and mitigate the risks associated with software updates and outages by implementing rigorous quality assurance practices, fostering collaboration, and enhancing regulatory frameworks. With its unique ability to adapt quickly and learn from incidents, the UAE can set an example for other regions by establishing stringent software supply chain regulations. This proactive approach, combined with continuous improvement and learning, can significantly enhance the resilience of critical infrastructures against future incidents.
This measure by the UAE is a forward-thinking one, ensuring that laws and policies are in place to regulate the sector.
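The staged-update practice recommended in the cybersecurity feature above, testing updates in controlled environments before full deployment, can be sketched as a simple promotion gate. The stage names and pass criterion are invented for illustration, not taken from any UAE policy document:

```python
# Illustrative staged-rollout gate: an update must pass each environment's
# health check before being promoted to the next. Stage names are invented.
STAGES = ["test_lab", "canary_fleet", "full_fleet"]

def promote(update: str, health_checks: dict) -> str:
    """Return the furthest stage the update reaches; stop at the first failure."""
    reached = "blocked"
    for stage in STAGES:
        if not health_checks.get(stage, False):
            break
        reached = stage
    return reached

if __name__ == "__main__":
    print(promote("patch-2024-09", {"test_lab": True, "canary_fleet": True,
                                    "full_fleet": True}))   # full_fleet
    print(promote("patch-bad", {"test_lab": True,
                                "canary_fleet": False}))    # test_lab
```

The design choice mirrors the article’s point: a failure early in the pipeline blocks promotion entirely, so a faulty update never reaches the full fleet.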