Daily Tech Digest - August 25, 2024

Never summon a power you can’t control

Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence. As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien. Many people try to measure and even define AI using the metric of “human-level intelligence”, and there is a lively debate about when we can expect AI to reach it. This metric is deeply misleading. It is like defining and evaluating planes through the metric of “bird-level flight”. AI isn’t progressing towards human-level intelligence. It is evolving an alien type of intelligence. Even at the present moment, in the embryonic stage of the AI revolution, computers already make decisions about us – whether to give us a mortgage, to hire us for a job, to send us to prison. Meanwhile, generative AIs like GPT-4 already create new poems, stories and images. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water.


Artificial Intelligence: To regulate or not is no longer the question

First, existing laws have been amended to support the use of AI, thereby enabling the economy to benefit from broader AI adoption. The Copyright Act 2021, for example, has been amended to clarify that copyrighted material may be used for machine learning provided that the model developer had lawful access to the data. Amendments to the Personal Data Protection Act (PDPA) 2012 enabled the re-use of personal data to support research and business improvement, after model development using anonymised data proved to be inadequate. Detecting fraud, preserving the integrity of systems and ensuring physical security of premises are also recognised as legitimate interests for using personal data in AI systems. Second, regulatory guidance has been issued on how existing regulations that protect consumers will also apply to AI systems. The Personal Data Protection Commission has issued a set of advisory guidelines on how the PDPA 2012 will apply at different stages of model development and deployment whenever personal data is used. It also clarifies the level of transparency expected from organisations deploying AI systems and how they may disclose relevant information to boost consumer trust and confidence. 


When You're Building The Future, The Past Is No Longer A Guide

Artificial Intelligence (AI) definitely has its place. But when it comes to these specific industrial and manufacturing challenges, it tends to be fundamental engineering and physics that generate the answers – number crunching and data processing in the extreme. That, in turn, means that the engineers working to deliver more detailed test results and more realistic prototypes, and to run ever more fine-grained simulations, turn to some of the most powerful high-performance computing systems to power their workloads. What might have counted as a system capable of High Performance Computing (HPC) a decade, or even a few years ago, can quickly run out of steam. Computational fluid dynamics (CFD) applications often use thousands of CPU cores, points out Gardinalli. But it’s not purely a question of throwing raw power – and dollars – at the issue. The real conundrum is how to map workloads across a wide range of different domains, each of which requires different underlying infrastructure. Finite element analysis (FEA), for example, focuses on working out how materials and structures will act under stress. It’s therefore critical to public infrastructure as well as to vehicle design and crash simulation. 
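To make the number-crunching point concrete, here is a minimal, self-contained sketch (not from the article) of the kind of calculation FEA performs: a one-dimensional elastic bar discretised into line elements, with a global stiffness matrix assembled and solved for nodal displacements. The material values, element count and load are invented for illustration; production crash and structural simulations solve the same class of linear systems with millions of degrees of freedom, which is exactly what drives the HPC demand described above.

```python
# Minimal 1D finite element sketch: an axially loaded elastic bar split into
# line elements. Illustrative only; material values, element count and load
# are invented for the example.
import numpy as np

E = 210e9        # Young's modulus (Pa), roughly steel
A = 1e-4         # cross-sectional area (m^2)
L = 2.0          # bar length (m)
n_elems = 1000   # refine this and the matrix work grows rapidly
n_nodes = n_elems + 1
le = L / n_elems

# Assemble the global stiffness matrix from identical 2x2 element matrices.
k_e = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elems):
    K[e:e + 2, e:e + 2] += k_e

# Boundary conditions: fixed at node 0, a 10 kN axial pull on the free end.
f = np.zeros(n_nodes)
f[-1] = 10e3
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print(f"tip displacement: {u[-1]:.6e} m")  # ~9.52e-4 m for these values
```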


Top companies ground Microsoft Copilot over data governance concerns

Asked how many had grounded a Copilot implementation, Berkowitz said it was about half of them. Companies, he said, were turning off Copilot software or severely restricting its use. "Now, it's not an unsolvable problem," he added. "But you've got to have clean data and you've got to have clean security in order to get these systems to really work the way you anticipate. It's more than just flipping the switch." While AI software also has specific security concerns, Berkowitz said the issues he was hearing about had more to do with internal employee access to information that shouldn't be available to them.  Asked whether the situation is similar to the IT security challenge 15 years ago when Google introduced its Search Appliance to index corporate documents and make them available to employees, Berkowitz said: "It's exactly that." Companies like Fast and Attivio, where Berkowitz once worked, were among those that solved the enterprise search security problem by tying file authorization rights to search results. So how can companies make Copilots and related AI software work? "The biggest thing is observability and not from a data quality viewpoint, but from a realization viewpoint," said Berkowitz. 
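The approach Berkowitz credits to the enterprise search vendors, tying file authorization rights to search results, can be illustrated with a small sketch. The structures and function below are hypothetical and not any vendor's actual API; the point is simply that every candidate document carries an access-control list, and results are trimmed to what the querying user may see before an assistant or index ever returns them.

```python
# Hypothetical sketch of security trimming: filter retrieved documents by the
# caller's group memberships before they reach the assistant or search UI.
# The Document and User structures are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: frozenset  # ACL: groups permitted to read this document

@dataclass
class User:
    user_id: str
    groups: frozenset = field(default_factory=frozenset)

def trim_results(user: User, candidates: list[Document]) -> list[Document]:
    """Return only the documents the user is authorized to read."""
    return [d for d in candidates if d.allowed_groups & user.groups]

# Usage: the salary memo never reaches someone outside HR or the exec team,
# no matter how relevant the index considers it.
docs = [
    Document("d1", "Q3 all-hands slides", frozenset({"everyone"})),
    Document("d2", "Executive salary memo", frozenset({"hr", "exec"})),
]
alice = User("alice", frozenset({"everyone", "engineering"}))
print([d.doc_id for d in trim_results(alice, docs)])  # ['d1']
```

The same trimming has to happen wherever a Copilot-style assistant retrieves content, which is why clean permissions matter as much as clean data.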


Five incorrect assumptions about ISO 27001

We wish there were such a thing as an impenetrable cyber barrier. Unfortunately, there isn’t—not even at the highest levels. For any IT system to be effective, information must be sent and received from external sources. These days, vast amounts of data get copied and transferred every second, moving around the world at lightspeed. As a result, there are always multiple potential access points for criminals to get in. ISO 27001 – and any good cybersecurity strategy – can’t offer 100% protection against cyber threats. However, they can significantly mitigate the risks associated with these attacks. A correctly applied ISMS will make you more likely to keep any malware or bad actors out. ... ISO 27001 isn’t a one-time thing. Unfortunately, nothing is in information security – or business in general. The initial implementation is the most time-consuming aspect and may require the most significant financial investment. But once it’s in place, there’s no time to sit back and relax. Your staff will immediately switch focus to using pre-agreed KPIs to analyse your ISMS’s effectiveness, suggesting and making strategic adjustments as relevant.


How we’re using ‘chaos engineering’ to make cloud computing less vulnerable to cyber attacks

Chaos engineering involves deliberately introducing faults into a system and then measuring the results. This technique helps to identify and address potential vulnerabilities and weaknesses in a system’s design, architecture, and operational practices. Methods can include shutting down a service, injecting latency (a time lag in the way a system responds to a command) and errors, simulating cyberattacks, terminating processes or tasks, or simulating a change in the environment in which the system is working and in the way it’s configured. In recent experiments, we introduced faults into live cloud-based systems to understand how they behave under stressful scenarios, such as attacks or faults. By gradually increasing the intensity of these “fault injections”, we determined the system’s maximum stress point. ... Chaos engineering is a great tool for enhancing the performance of software systems. However, to achieve what we describe as “antifragility” – systems that could get stronger rather than weaker under stress and chaos – we need to integrate chaos testing with other tools that transform systems to become stronger under attack.
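As a rough illustration of gradually increasing fault intensity (a sketch of the general technique, not the researchers' actual tooling), the wrapper below injects latency and random errors into a service call at a configurable rate, which can be ramped up step by step to find the point at which the system under test stops meeting its objectives.

```python
# Illustrative fault-injection wrapper: adds latency and random failures to a
# callable with increasing intensity. The thresholds, failure mode and target
# function are invented for this sketch.
import random
import time

def inject_faults(call, latency_s=0.0, error_rate=0.0):
    """Wrap a service call with injected latency and random errors."""
    def wrapped(*args, **kwargs):
        time.sleep(latency_s)                    # injected latency
        if random.random() < error_rate:         # injected error
            raise RuntimeError("injected fault")
        return call(*args, **kwargs)
    return wrapped

def fetch_order(order_id):
    """Stand-in for a real service call."""
    return {"order_id": order_id, "status": "shipped"}

# Ramp up the intensity and record how often the call still succeeds.
for step in range(5):
    faulty = inject_faults(fetch_order,
                           latency_s=0.001 * step,
                           error_rate=0.1 * step)
    ok = 0
    for _ in range(100):
        try:
            faulty(42)
            ok += 1
        except RuntimeError:
            pass
    print(f"step {step}: error_rate={0.1 * step:.1f}, successes={ok}/100")
```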


Six pillars for AI success: how the C-suite can drive results

Many AI and GenAI solutions have common patterns and benefit from reusable assets that can accelerate time to value and reduce costs. Without a control tower, different groups across an enterprise are at risk of building very similar things from scratch for various use cases. The control tower effectively has authority over where an organization makes its investments. It creates value by identifying patterns across the various use cases that align with business needs and, for example, prioritizing the development of shared GenAI solutions. ... The truly transformative impact would be to entirely reimagine what you do in the front office, not just streamline the back office. GenAI unlocks new products, services and business models that are easy to overlook if you approach the technology with a robotic process automation mindset. That can include creating new products and features enabled through GenAI, equipping them with connectivity under pay-as-you-go service subscription models, selling them directly to consumers instead of through intermediaries, and leveraging the consumer data for insights and perhaps selling it as a separate revenue stream. 


Cyber Hygiene: The Constant Defense Against Evolving B2B Threats

By partnering with companies that independently spot threats and scams, such as domain spoofing attempts, and provide early warnings, businesses can stay ahead of potential attacks. “That’s an important control, and I strongly recommend it for any company,” Kenneally said, stressing the benefits of collaborative working partnerships. “It’s about ensuring that the controls are in place and that we are partnering with our customers to mitigate risks,” he added. This is particularly relevant given the increasing sophistication of phishing attempts, some of which may be assisted by artificial intelligence. Another aspect of Boost’s strategy is fostering a culture of resilience and agility within the organization. This involves continuous training and education, not just for the IT team but across the entire company. “Training is critical,” Kenneally said. ... As the cybersecurity landscape continues to evolve, the need for companies to protect their digital perimeter becomes more pressing. But while the threats may change, the fundamental principles of good cybersecurity — vigilance, education and proactive planning — remain constant.
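One concrete control behind the domain-spoofing warnings Kenneally mentions can be sketched as a lookalike-domain check: comparing observed sender or link domains against the legitimate one and flagging near misses. The example below is illustrative only; it uses a generic string-similarity measure, and the domain names and threshold are invented.

```python
# Illustrative lookalike-domain check: flag domains that closely resemble a
# legitimate one without matching it exactly. The domain names and threshold
# below are invented for this sketch.
from difflib import SequenceMatcher

LEGIT_DOMAIN = "examplepay.com"

def is_suspicious(domain: str, legit: str = LEGIT_DOMAIN,
                  threshold: float = 0.8) -> bool:
    """Flag domains that look like the legitimate one but are not it."""
    if domain == legit:
        return False
    similarity = SequenceMatcher(None, domain, legit).ratio()
    return similarity >= threshold

for candidate in ["examplepay.com", "examp1epay.com",
                  "example-pay.co", "unrelated.org"]:
    verdict = "suspicious" if is_suspicious(candidate) else "ok"
    print(f"{candidate}: {verdict}")
```

A production control would add homoglyph normalisation and feeds of newly registered domains, but the principle of scoring similarity to the protected domain is the same.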


I’ve got the genAI blues

Why is this happening? I’m not an AI developer, but I pay close attention to the field and see at least two major reasons these tools are beginning to fail. The first is that the quality of the content used to create the major LLMs has never been that good. Many include material from such “quality” websites as Twitter, Reddit, and 4Chan. As Google’s AI Overview showed earlier this year, the results can be dreadful. As MIT Technology Review noted, it came up with such poor-quality answers as “users [should] add glue to pizza or eat at least one small rock a day, and that former US president Andrew Johnson earned university degrees between 1947 and 2012, despite dying in 1875.” Unless you glue rocks into your pizza, those are silly, harmless examples, but if you need the right answer, it’s another matter entirely. Take, for example, the lawyer whose legal paperwork cited cases the AI had made up. The judges were not amused. If you want to sex chat with genAI tools, which appears to be one of the most popular uses for ChatGPT, accuracy probably doesn’t matter that much to you. Getting the right answers, though, is what matters to me and should matter to anyone who wants to use AI for business.

AI technology brings significant benefits to the Financial Services sector, including enhanced efficiency through automation, improved accuracy in risk assessments, personalised customer experiences via AI-driven insights and faster, more secure fraud detection. It also enables predictive analytics for better decision-making in areas like investment and lending. ... AI is there to support the employee – to elevate human potential by delivering insights and knowledge and expediting results. However, challenges include the complexity of implementing AI systems, concerns around data privacy and security, regulatory compliance, and potential biases in AI models that can lead to unfair outcomes. Ensuring transparency and trust in AI decisions is also crucial for its broader acceptance in the sector. ... Trustworthy AI also ensures that compliance with regulations is maintained, risks are properly managed and ethical standards are upheld. In a sector where customer relationships are built on trust, any misstep could lead to reputational damage, financial loss, or regulatory penalties. 



Quote for the day:

“A dream doesn't become reality through magic; it takes sweat, determination, and hard work.” -- Colin Powell
