Daily Tech Digest - February 20, 2024

How generative AI will benefit physical industries

To make generative AI’s potential a reality for a physical business, two crucial elements come into play: people and data. Investing in a highly skilled team is a given precondition for success with any business. Also critical is having a diversity of expertise, as well as a diversity of experiences, cultural touch points, and background. Drawing on this expertise and experience to inform how generative AI is developed allows more context to be built-in, and the models can be expanded to serve a global audience versus a regional or national one. Data quality in both edge computing and generative AI models is crucial. This is what has driven Motive to invest in a truly world-class annotations team. Because accuracy is so critical for the safety and optimization of our customers, this team ensures that the processes behind our use of generative AI are strong and consistent. These processes include ensuring the highest quality data and labels to train our models, and thus our products and services. At the same time, generative AI in the physical economy will only be as useful as the insights and capabilities it creates.


Do you need a larger project team?

There is plenty of anecdotal evidence in the industry where GCs have taken on data center projects in EU regions and have not fully understood the local resourcing requirements and supply chain logistics. In addition, they have incorrectly assumed that a UK labor force will be as effective as normal, when they are on rotational-based attendance in a regional project office. Instead, the solution may lie in developing smaller, fully supported, highly competent, highly motivated, and well-compensated teams capable of delivering increased outputs to realize your competitive potential – a theme also adopted by the World Quality Week in 2023. To meet the strong imperative for quick time-to-market in the industry within the context of an acute skills shortage, we argue that the solution lies in focusing on training people and empowering them with the capabilities of AI. Streamlined, lean teams with mature AI tools have a better chance of efficiently delivering on larger projects. Investment in training is crucial across the industry, particularly innovative approaches that enable smaller teams to achieve more thanks to AI assistance and other technological advancements.


Data Observability in the Cloud: Three Things to Look for in Holistic Data Platform Governance

To be truly meaningful in addressing the pain associated with data and AI pipelines, data observability tools must expand into FinOps. It’s no longer enough to know where a pipeline stalls or breaks -- data teams need to know how much the pipelines cost. In the cloud, inefficient performance drives up computing costs, which in turn drives up total costs. Tools must encompass FinOps to provide observability into costs pertaining to both infrastructure and computing resources, broken down by job, user, and project. They must also include advanced analytics to provide guidance on how to make individual pipelines cost-efficient. This will free up data teams to focus on strategic decision-making rather than spending their time reconfiguring pipelines for cost. ... To meet these demands, data observability solution vendors must offer custom products that allow customers to see on a platform-specific level such things as detailed cost visibility, efficient management of storage costs, chargeback/showback, and where the expensive projects, queries, and users lie.


Fundamentals of Functions and Relations for Software Quality Engineering

Effective testing is not just about covering every line of code. It's about understanding the underlying relationships. How do we effectively test the complex relationships in our software code? Understanding functions and relations proves an invaluable asset in this endeavor. ... It's worth noting that while all programs can be viewed as functions in a broad sense, not all are "pure" functions. Pure functions have no side effects, meaning they solely rely on their inputs to produce outputs without altering any external state. In contrast, many practical programs involve side effects, complicating their pure function interpretation. ... While functions provide clear input-output connections, not all relationships in software are so straightforward. Imagine tracking dependencies between tasks in a project management tool. Here, multiple tasks might relate to each other, forming a more complex network. ... Relations can sometimes group elements into equivalence classes, where elements within a class behave similarly. Testers can leverage this by testing one element in each class, assuming similar behavior for others, saving time and resources.


Your AI Girlfriend Is Cheating On You, Warns Mozilla

Mozilla said it could find only one chatbot that met its minimum security standards, with a worrying lack of transparency over how the intensely personal information that might be shared in such apps is protected. Almost two thirds of the apps didn’t reveal whether the data they collect is encrypted. Just under half of them permitted the use of weak passwords, with some even accepting a password as flimsy as “1”. More than half of the apps tested also failed to let users delete their personal data. One even claimed that “communication via the chatbot belongs to the software.” Mozilla also found the use of trackers—tiny pieces of code that gather information about your device and what you do on it— was widespread among the romantic chatbots. ... The main tip is not to say anything to the chatbot that you wouldn’t want friends or colleagues to discover, as the privacy of these services cannot be guaranteed. Also use a strong password, request that personal data is deleted once you’ve finished using the chatbot, opt out of having your data used to train AI models and don’t accept phone permissions that give the chatbot access to your location, camera, microphone or files on your device.


A Balanced Look at the Potential and Challenges of Popular LLMs

A beautiful symphony requires more than just individual talent. Ethical considerations like potential biases and misinformation risks demand attention. We must ensure responsible development, ensuring these LLMs don’t become instruments of discord but rather powerful tools for good. The potential for collaboration is even more exciting. Imagine Bard fact-checking Claude’s poems, or Qwen providing real-time data for GPT-3.5-Turbo-0613’s code generation. Such collaborations could lead to groundbreaking innovations, a true ensemble performance exceeding the capabilities of any single LLM. This is just the opening act of a much grander performance. As the music evolves, LLMs hold immense potential. Advancements in natural language understanding could enable nuanced conversations, personalized education could become a reality, and creative collaboration could reach unprecedented heights. This orchestra is just beginning its performance, and the future holds a symphony of possibilities waiting to be composed. In short, The key lies in understanding their technical nuances, recognizing their individual strengths, and fostering responsible development. 


Without contact prints or finger detail photos, how can an attacker hope to get any fingerprint data to enhance MasterPrint and DeepMasterPrint dictionary attack results on user fingerprints? One answer is as follows: the PrintListener paper says that “finger-swiping friction sounds can be captured by attackers online with a high possibility.” The source of the finger-swiping sounds can be popular apps like Discord, Skype, WeChat, FaceTime, etc. Any chatty app where users carelessly perform swiping actions on the screen while the device mic is live. Hence the side-channel attack name – PrintListener. ... To prove the theory, the scientists practically developed their attack research as PrintListener. In brief, PrintListener uses a series of algorithms for pre-processing the raw audio signals which are then used to generate targeted synthetics for PatternMasterPrint. Importantly, PrintListener went through extensive experiments “in real-world scenarios,” and, as mentioned in the intro, can facilitate successful partial fingerprint attacks in better than one in four cases, and complete fingerprint attacks in nearly one in ten cases.


ClickHouse: Scaling Log Management with Managed Services

A viable solution emerges in the merging of the advantages of open-source tools with the efficiency of managed services. This combination effectively addresses scalability and cost concerns, while upholding the operational efficiency required. Striking this balance between functionality, cost, and effort is particularly critical for teams constrained by budget and limited engineering resources. To illustrate this approach, consider specific log management strategies, such as the one implemented by DoubleCloud, which embody these principles. DoubleCloud, for instance, employs services like ClickHouse for data transfer and visualization, effectively managing substantial log volumes within a modest budget. ClickHouse is renowned for its efficient data compression techniques, serving as a prime example of how open source tools, when properly managed, can significantly enhance log management processes. This scenario provides a practical demonstration of how the integration of open source benefits with managed services can offer optimal solutions to the challenges previously discussed.


4 hidden risks of your enterprise cloud strategy

Cloud vendors themselves can encounter any number of business-related issues that can challenge their ability to provide service to the standard enterprise CIOs committed to when the contract was signed, including the introduction of new risks. ... Many enterprise IT executives see the cloud as delivering near-infinite scalability — something that is not mathematically true. This is not helped by cloud marketing, which strongly implies — if not outright promises — unlimited scalability. Most of the time, the cloud’s elasticity affords great levels of scalability for its tenets. When emergency strikes, however, all bets are off, says Charles Blauner, operating partner and CISO in residence at cybersecurity investment firm Team8, and former CISO for Citigroup, Deutsche Bank, and JP Morgan Chase. ... “CIOs believe that by using multiple cloud providers, they think that it is improving availability, but it’s not. All it’s doing is increasing complexity, and complexity has always been the enemy of security,” Winckless says. “It is far more cost-effective to use the cloud provider’s zones.” Enterprises also often fall short on the financial and efficiency benefits promised by the cloud because they are unwilling to trust the cloud environment’s mechanisms sufficiently — or so argues Rich Isenberg, a partner at consulting firm McKinsey who oversees their cybersecurity strategy practice.


Data Governance in the Era of Generative AI

GenAI accelerates trends already evident with traditional AI: the importance of data quality and privacy, growing focus on responsible and ethical AI, and the emergence of AI regulations. This will create both new challenges and opportunities for DG. ... Traditional DG processes provide a well-trodden path for proper management and usage of data across organizations: discover and classify data to identify critical/sensitive data; map the data to policies and other business context; manage data access and security; manage privacy and compliance; and monitor and report on effectiveness. Similarly, as DG frameworks expand to support AI governance, they have an important role to play across the GenAI/LLM value chain. ... Traditional AI/ML will continue to be critical for automating and scaling various DG processes. These include data classification; associating policy and business context with data; and detecting anomalies/issues and creating and applying data quality rules to fix them. Building on these capabilities, GenAI has the potential to turbocharge data democratization and drive dramatic gains in productivity for data teams.



Quote for the day:

"You may be good. You may even be better than everyone esle. But without a coach you will never be as good as you could be." -- Andy Stanley

Daily Tech Digest - February 19, 2024

Why artificial general intelligence lies beyond deep learning

Decision-making under deep uncertainty (DMDU) methods such as Robust Decision-Making may provide a conceptual framework to realize AGI reasoning over choices. DMDU methods analyze the vulnerability of potential alternative decisions across various future scenarios without requiring constant retraining on new data. They evaluate decisions by pinpointing critical factors common among those actions that fail to meet predetermined outcome criteria. The goal is to identify decisions that demonstrate robustness — the ability to perform well across diverse futures. While many deep learning approaches prioritize optimized solutions that may fail when faced with unforeseen challenges (such as optimized just-in-time supply systems did in the face of COVID-19), DMDU methods prize robust alternatives that may trade optimality for the ability to achieve acceptable outcomes across many environments. DMDU methods offer a valuable conceptual framework for developing AI that can navigate real-world uncertainties. Developing a fully autonomous vehicle (AV) could demonstrate the application of the proposed methodology. The challenge lies in navigating diverse and unpredictable real-world conditions, thus emulating human decision-making skills while driving. 


Bouncing back from a cyber attack

In the case of a cyber attack, the inconceivable has already happened – all you can do now is bounce back. The big picture issue is that too often IoT (internet of things) networks are filled with bad code, poor data practices, lack of governance, and underinvestment in secure digital infrastructure. Due to the popularity and growth of IoT, manufacturers of IoT devices spring up overnight promoting products that are often constructed using lower-quality components and firmware, which can have sometimes well-known vulnerabilities exposed due to poor design and production practices. These vulnerabilities are then introduced to a customer environment increasing risk and possibly remaining unidentified. So, there’s a lot of work to do, including creating visibility over deep, widely connected networks with a plethora of devices talking to each other. All too often, IT and OT networks run on the same flat network. For these organisations, many are planning segmentation projects, but they are complex and disruptive to implement, so in the meantime companies want to understand what's going on in these environments and minimise disruption in the event of an attack.


Diversity, Equity, and Inclusion for Continuity and Resilience

As continuity professionals, the average age tends to skew older, so how do we continue to bring new people to the fold to ensure they feel like they can learn and be respected in the industry? Students need to be made aware this is an industry they can step into. Unfortunately, many already have experience seeing active shooter drills as the norm. They may have never organized one, but they have participated in many of these drills in school. Why not take advantage of that experience for the students who are interested in this field? Taking their advice could make exercising like active shooter or weather events less traumatic. Listening to their experience – doing it for at least 13 years – gives them a lot of insight from even Millennials who grew up at the forefront of school shootings, but not actively exercising what to do if it happens while in school. These future colleagues’ insights could change how we do specific exercises and events to benefit everyone. Still, there must be openness to new and fresh ideas and treating them with validity instead pushing them off due to their age and experience. Similarly, people with disabilities have always been vocal about their needs. 


AI’s pivotal role in shaping the future of finance in 2024 and beyond

As AI becomes more embedded in the financial fabric, regulators are crafting a nuanced framework to ensure ethical AI use. The Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI) have initiated guidelines for responsible AI adoption, emphasising transparency, accountability, and fairness in algorithmic decision-making processes. While the benefits are palpable, challenges persist. The rapid pace of AI integration demands a strategic approach to ensure a safe, financial eco-system ... The evolving nature of jobs due to AI necessitates a concerted effort towards upskilling the workforce. A McKinsey Global Institute report indicates that approximately 46% of India’s workforce may undergo significant changes in their job profiles due to automation and AI. To address this, collaborative initiatives between the government, educational institutions, and the private sector are imperative to equip the workforce with the requisite skills for the future. ... The Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI) have recognised the need for ethical AI use in the financial sector. Establishing clear guidelines and frameworks for responsible AI governance is crucial. 


How to proactively prevent password-spray attacks on legacy email accounts

Often with an ISP it’s hard to determine the exact location from which a user is logging in. If they access from a cellphone, often that geographic IP address is in a major city many miles away from your location. In that case, you may wish to set up additional infrastructure to relay their access through a tunnel that is better protected and able to be examined. Don’t assume the bad guys will use a malicious IP address to announce they have arrived at your door. According to Microsoft, “Midnight Blizzard leveraged their initial access to identify and compromise a legacy test OAuth application that had elevated access to the Microsoft corporate environment. The actor created additional malicious OAuth applications.” The attackers then created a new user account to grant consent in the Microsoft corporate environment to the actor-controlled malicious OAuth applications. “The threat actor then used the legacy test OAuth application to grant them the Office 365 Exchange Online full_access_as_app role, which allows access to mailboxes.” This is where my concern pivots from Microsoft’s inability to proactively protect its processes to the larger issue of our collective vulnerability in cloud implementations. 


How To Implement The Pipeline Design Pattern in C#

The pipeline design pattern in C# is a valuable tool for software engineers looking to optimize data processing. By breaking down a complex process into multiple stages, and then executing those stages in parallel, engineers can dramatically reduce the processing time required. This design pattern also simplifies complex operations and enables engineers to build scalable data processing pipelines. ...The pipeline design pattern is commonly used in software engineering for efficient data processing. This design pattern utilizes a series of stages to process data, with each stage passing its output to the next stage as input. The pipeline structure is made up of three components: The source: Where the data enters the pipeline; The stages: Each stage is responsible for processing the data in a particular way; The sink: Where the final output goes Implementing the pipeline design pattern offers several benefits, with one of the most significant benefits in efficiency of processing large amounts of data. By breaking down the data processing into smaller stages, the pipeline can handle larger datasets. The pattern also allows for easy scalability, making it easy to add additional stages as needed. 


Accuracy Improves When Large Language Models Collaborate

Not surprisingly, this idea of group-based collaboration also makes sense with large language models (LLMs), as recent research from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is now showing. In particular, the study focused on getting a group of these powerful AI systems to work with each other using a kind of “discuss and debate” approach, in order to arrive at the best and most factually accurate answer. Powerful large language model AI systems, like OpenAI’s GPT-4 and Meta’s open source LLaMA 2, have been attracting a lot of attention lately with their ability to generate convincing human-like textual responses about history, politics and mathematical problems, as well as producing passable code, marketing copy and poetry. However, the tendency of these AI tools to “hallucinate”, or come up with plausible but false answers, is well-documented; thus making LLMs potentially unreliable as a source of verified information. To tackle this problem, the MIT team claims that the tendency of LLMs to generate inaccurate information will be significantly reduced with their collaborative approach, especially when combined with other methods like better prompt design, verification and scratchpads for breaking down a larger computational task into smaller, intermediate steps.


There's AI, and Then There's AGI: What You Need to Know to Tell the Difference

For starters, the ability to perform multiple tasks, as an AGI would, does not imply consciousness or self-will. And even if an AI had self-determination, the number of steps required to decide to wipe out humanity and then make progress toward that goal is too many to be realistically possible. "There's a lot of things that I would say are not hard evidence or proof, but are working against that narrative [of robots killing us all someday]," Riedl said. He also pointed to the issue of planning, which he defined as "thinking ahead into your own future to decide what to do to solve a problem that you've never solved before." LLMs are trained on historical data and are very good at using old information like itineraries to address new problems, like how to plan a vacation. But other problems require thinking about the future. "How does an AI system think ahead and plan how to eliminate its adversaries when there is no historical information about that ever happening?" Riedl asked. "You would require … planning and look ahead and hypotheticals that don't exist yet … there's this big black hole of capabilities that humans can do that AI is just really, really bad at."


Metaverse and the future of product interaction

As the metaverse continues to evolve, so must the approach to product design. This includes considering how familiar objects can be repurposed as functional interface elements in a virtual environment. Additionally, understanding the dynamics of group interactions in virtual spaces is crucial. Designers must anticipate these trends and adapt their designs accordingly, ensuring that products remain relevant and engaging in the ever-changing landscape of the metaverse. In India, the metaverse presents significant opportunities for businesses to redefine consumer experiences. It opens up possibilities for more interactive, personalised, and adventurous engagements with customers. This not only increases customer engagement and loyalty but also creates new avenues for value exchange and revenue streams. The metaverse, with its potential to impact diverse sectors like communications, retail, manufacturing, education, and banking, is poised to be a game-changer in the Indian market. ... As the metaverse continues to expand its reach and influence, businesses and designers in India and around the world must evolve to meet the demands of this new digital era.


Build trust to win out with genAI

Businesses need to adopt ‘responsible technology’ practices, which will give them a powerful lever that enables them to deploy innovative genAI solutions while building trust with consumers. Responsible tech is a philosophy that aligns an organization’s use of technology to both individuals’ and society’s interests. It includes developing tools, methodologies, and frameworks that observe these principles at every stage of the product development cycle. This ensures that ethical concerns are baked in at the outset. This approach is gaining momentum, as people realize how technologies such as genAI, can impact their daily lives. Even organizations such as the United Nations are codifying their approach to responsible tech. Consumers urgently want organizations to be responsible and transparent with their use of genAI. This can be a challenge because, when it comes to transparency, there are a multitude of factors to consider, including everything from acknowledging AI is being used to disclosing what data sources are used, what the steps were taken to reduce bias, how accurate the system is, or even the carbon footprint associated with the genAI system.



Quote for the day:

"Entrepreneurs average 3.8 failures before final success. What sets the successful ones apart is their amazing persistence." -- Lisa M. Amos

Daily Tech Digest - February 18, 2024

Remote Leadership Strategies for Sustained Engagement

The leaders foresee a future where AI and collaboration technologies continue to reduce the friction of remote working and increase collaboration in the virtual world. “With the release of solutions such as Apple Vision, this will be the start of truly immersive remote leadership and collaboration that is both inclusive and focussed on employee wellbeing,” Boast says. “All this said, I hope we continue to make an effort to meet in person periodically to refresh and renew connections.” For Ratnavira, leaders have a critical role in fostering trust, continuous communication, and feedback, which is key to unlocking the full potential of a remote workforce and building high-performance teams. “A culture-first organization intuitively figures remote work because there is a lot of trust placed in individuals and investment made in their overall growth,” says Sambandam. Remote work models have proven that success can thrive in this transformative approach. “What was once the ‘new normal’ is now etched into the fabric of our operations,” he adds. “This isn’t a temporary shift; it’s a paradigm shift with no point of return.”


The Rise of Small Language Models

Small language models are essentially more streamlined versions of LLMs, in regards to the size of their neural networks, and simpler architectures. Compared to LLMs, SLMs have fewer parameters and don’t need as much data and time to be trained — think minutes or a few hours of training time, versus many hours to even days to train a LLM. Because of their smaller size, SLMs are therefore generally more efficient and more straightforward to implement on-site, or on smaller devices. Moreover, because SLMs can be tailored to more narrow and specific applications, that makes them more practical for companies that require a language model that is trained on more limited datasets, and can be fine-tuned for a particular domain. Additionally, SLMs can be customized to meet an organization’s specific requirements for security and privacy. Thanks to their smaller codebases, the relative simplicity of SLMs also reduces their vulnerability to malicious attacks by minimizing potential surfaces for security breaches. On the flip side, the increased efficiency and agility of SLMs may translate to slightly reduced language processing abilities, depending on the benchmarks the model is being measured against.


Why software 'security debt' is becoming a serious problem for developers

Larger tech enterprises appear to be the most likely to have critical levels of security debt, according to the report, with over three times as many large tech firms found to have critical security debt compared to government organizations. The flaws that make up this debt were found in both the first-party code and third party application code taken from open source libraries, for example. The study found nearly two-thirds (63%) of the applications scanned had flaws in the first-party code, compared to 70% that had flaws in their third-party code. ... Eng’s advice for reducing security debt caused by flaws in first party code is to better integrate security testing into the entire software development lifecycle (SDLC) to ensure devs catch issues earlier in the process. If developers are forced to carry out security testing before they can merge new code into the main repository, this would go a long way in reducing flaws in first party code, Eng argued. But, Eng noted, this is not how the majority of businesses operate their development teams. “The problem is not every company is doing security testing at that level of granularity. 


Mythbust Your Way to Modern Data Management

Enterprises often believe there is one path for data compression. They may think that data compression is done exclusively in software on the host CPU. Because the CPU does the processing, there is the risk of a performance penalty under load, making it a non-starter for critical performance workloads. In the same way, the data pipeline within your organization is unique and tailored to your requirements, and architecting how data flows offers plenty of options. Data compression can be done in many ways, and the outcomes of choosing how and where compression should be processed can lead to benefits that cascade throughout the architecture. ... How can you improve the overall cost of ownership of your infrastructure? How can you increase storage and performance while decreasing power consumption? How can you make the data center more sustainable? When organizations try to solve these sorts of problems, data compression may not immediately leap to mind as the answer. Data compression doesn’t get more attention because organizations simply aren’t thinking about it as a problem-solving tool. This becomes clear when you look at search trends related to data and see that “enterprise data compression” is orders of magnitude lower down the results than something like “data management.”


Want to be a data scientist? Do these 4 things, according to business leaders

"You have to try new tech continuously," he says. "Don't hesitate to use generative AI to help you complete your job. Now, you can write code by saying to a model, 'Okay, write me something that does this.' So, be open -- embrace the tech. I think that's important." Martin says that he's not your typical chief data officer (CDO). Rather than just focusing on leadership concerns, he still gets his hands dirty with code -- and he advises up-and-coming data talent to do the same. "It's important if you want to get ahead that you understand what you're doing and that you're playing with tech," he says. "It gives me an edge, especially in mathematics and data science. I know about statistics, and I can build models myself." ... "While we can talk about math expertise, which is important because you need some level of academic capability, I think more important than that, certainly when I'm recruiting, is that I'm looking for the rounded individual," he says. "The straight A-grade student is great, but that person might not always be the best fit, because they've got to manage their time, they need to interact with the business, and they need to go and talk with stakeholders from across the business."


The best part of working in data and AI is the constant change

AI and analytics is such a vast field today that it gives people the freedom to chart their own course. You can choose to deep dive into an area of data – such as data governance, data management, data privacy, or become a data scientist working with ML models. You can take on the more technical roles of data engineering, data architecture, or take a more holistic advisory role in consulting the client on their end-to-end data and AI strategy. You can choose to work for a consulting firm like Accenture and help solve problems for clients across industries or be part of an organisation’s internal data teams. The field of AI and analytics offers many career paths and is only going to grow as we head towards a future underpinned by data and AI. ... While technical skills underpin many roles in the space and should be developed consistently, logical reasoning, strategic thinking, industry knowledge etc, play an important part as well. My advice is to build a network of mentors and peers who can be your guides in your career journey. The support and wisdom of those who have walked this path before can be invaluable. But, equally, trust your unique perspective and voice. Your diversity of thought is a strength that will set you apart.


A quantum-safe cryptography DNSSEC testbed

In the context of the DNS, DNSSEC may no longer guarantee authentication and integrity when powerful quantum computers become available. For the end user, this means that they can no longer be sure that when they browse to example.nl they will end up at the correct website (spoofing). They may also receive more spam and phishing emails since modern email security protocols rely on DNSSEC as well. Fortunately, cryptographers are working on creating cryptographic algorithms resistant to quantum computer attacks — so-called quantum-safe cryptographic algorithms. However, those quantum-safe algorithms often have very different characteristics than their non-quantum-safe counterparts, such as signature sizes, computation time requirements, memory requirements and, in some cases, key management requirements. As a consequence, those quantum-safe algorithms are not drop-in replacements for today’s algorithms. For DNSSEC, it is already known that there are stringent requirements when it comes to, for example, signature sizes and validation speed. But other factors, such as the size of the zone file, also have implications for the suitability of algorithms.


Someone had to say it: Scientists propose AI apocalypse kill switches

In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication being, if implemented incorrectly, that such a kill switch could become a target for cybercriminals to exploit. Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole.


Cloud mastery is a journey

A secure foundation is required for developing an enterprise’s strong digital immunity. This entails various aspects like safeguarding against hackers, disaster recovery strategies, and designing robust systems. Enterprises employ the defense-in-depth approach for protection against hackers. It means that every element of an IT environment should be built robustly and securely. For this, a few practical strategies include employing AI-powered firewalls, System Information Event Management, strong identity authentication, antivirus tools, vulnerability management, and teams of ethical hackers for simulated attacks. The cloud can be a powerful asset for building backup systems and disaster recovery plans. These are critical to combat potential data center failures caused by an event like a storm, fire or electrical outage. Focusing on resilience is equally important and extends beyond robust software. Resiliency means addressing every possible failure and threat in securing and maintaining the availability of systems, data and networks. For example, failures in services like firewalls and content distribution networks might be rare but are plausible. 


It’s Time to End the Myth of Untouchable Mainframe Security.

It is critical for mainframe security to re-enter the cybersecurity conversation, and that starts with doing away with commonly held misconceptions. First is the mistaken belief that due to their mature or streamlined architecture with fewer vulnerabilities, mainframes are virtually impervious to hackers. There is the misconception that they exist in isolation within the enterprise IT framework, disconnected from the external world where genuine threats lurk. And then there’s the age factor. People newer to the profession have relatively little experience with mainframe systems when compared to their more experienced counterparts and will tend to not question their viewpoints or approaches of their leaders or senior team members. This state of affairs can’t continue. In the contemporary landscape, modern mainframes are routinely accessed by employees and are intricately linked to applications that encompass a wide array of functions, ranging from processing e-commerce transactions to facilitating personal banking services. The implications of a breach can’t be overstated. 



Quote for the day:

"When you do what you fear most, then you can do anything." -- Stephen Richards

Daily Tech Digest - February 17, 2024

Europe’s Digital Services Act applies in full from tomorrow - here’s what you need to know

In one early sign of potentially interesting times ahead, Ireland’s Coimisiún na Meán has recently been consulting on rules for video sharing platforms that could force them to switch off profiling-based content feeds by default in that local market. In that case the policy proposal was being made under EU audio visual rules, not the DSA, but given how many major platforms are located in Ireland the Coimisiún na Meán, as DSC, could spin up some interesting regulatory experiments if it take a similar approach when it comes to applying the DSA on the likes of Meta, TikTok, X and other tech giants. Another interesting question is how the DSA might be applied to fast-scaling generative AI tools. The viral rise of AI chatbots like OpenAI’s ChatGPT occurred after EU lawmakers had drafted and agreed the DSA. But the intent for the regulation was for it to be futureproofed and able to apply to new types of platforms and services as they arise. Asked about this, a Commission official said they have identified two different situations vis-à-vis generative AI tools: One where a VLOP is embedding this type of AI into an in-scope platform  — where they said the DSA does already apply. 


Composable Architectures vs. Microservices: Which Is Best?

Composable architecture is a modular approach to software design and development that builds flexible, reusable and adaptable software architecture. It entails breaking down extensive, monolithic platforms into small, specialized, reusable and independent components. This architectural pattern comprises a pluggable array of modular components, such as microservices, packaged business capability (PBC), headless architecture and API-first development that can be seamlessly replaced, assembled and configured to align with business requirements. In a composable application, each component is developed independently using the technologies best suited to the application’s functions and purpose. This enables businesses to build customized solutions that can swiftly adapt to business needs. ... The composable approach has gained significant popularity in e-commerce applications and web development for enhancing the digital experience for developers, customers and retailers, with industry leaders like Shopify and Amazon taking advantage of its benefits.


Nginx core developer quits project in security dispute, starts “freenginx” fork

Comments on Hacker News, including one by a purported employee of F5, suggest Dounin opposed the assigning of published CVEs to bugs in aspects of QUIC. While QUIC is not enabled in the most default Nginx setup, it is included in the application's "mainline" version, which, according to the Nginx documentation, contains "the latest features and bug fixes and is always up to date." ... MZMegaZone confirmed the relationship between security disclosures and Dounin's departure. "All I know is he objected to our decision to assign CVEs, was not happy that we did, and the timing does not appear coincidental," MZMegaZone wrote on Hacker News. He later added, "I don't think having the CVEs should reflect poorly on NGINX or Maxim. I'm sorry he feels the way he does, but I hold no ill will toward him and wish him success, seriously." Dounin, reached by email, pointed to his mailing list responses for clarification. He added, "Essentially, F5 ignored both the project policy and joint developers' position, without any discussion." MegaZone wrote to Ars (noting that he only spoke for himself and not F5), stating, "It's an unfortunate situation, but I think we did the right thing for the users in assigning CVEs and following public disclosure practices. 


Bridging Silos and Overcoming Collaboration Antipatterns in Multidisciplinary Organizations

Some problems with these anti-patterns. I'm going to talk again in threes, I've talked about three anti-patterns, one role across many teams, product versus engineering wars, and X-led. I'm going to talk about some of the problems with these. The first one is one group holds the power. One group holds all the decision-making power, and others can't properly contribute. They aren't given the opportunity to contribute. In our first example, Anita the designer doesn't hold any power because all she's doing is playing catch-up. She's got no time to really contribute to decisions. In the second anti-pattern in the product versus engineering, there's always a battle between who holds the power. It's not collaborative, there's silos between the two. ... Professional protectionism is about people protecting their professional boundaries and not letting other people step into them. It's like, "No, this is my area, you stay over there and you do your thing and I'll do my thing over here." Maybe some people have experienced this. For example, I was working with an organization recently and they said the user research team didn't want to publish how they did user research, because other people might do it.


Scalability Challenges in Microservices Architecture: A DevOps Perspective

Although microservices architectures naturally lend themselves to scalability, challenges remain as systems grow in size and complexity. Efficiently managing how services discover each other and distribute loads becomes complex as the number of microservices increases. Communication across complex systems also introduces a degree of latency, especially with increased traffic, and leads to an increased attack surface, raising security concerns. Microservices architectures also tend to be more expensive to implement than monolithic architectures. Creating secure, robust, and well-performing microservices architectures begins with design. Domain-driven design plays a vital role in developing services that are cohesive, loosely coupled, and aligned with business capabilities. Within a genuinely scalable architecture, every service can be deployed, scaled, and updated autonomously without affecting the others. One essential aspect of effectively managing microservices architecture involves adopting a decentralized governance model, in which each microservice has a dedicated team in charge of making decisions related to the service


CQRS Pattern in C# and Clean Architecture – A Simplified Beginner’s Guide

When implementing Clean Architecture in C#, it’s important to recognize the role each of the four components plays. Entities and Use Cases represent the application’s core business logic, Interface Adapters manage the communication between the Use Cases and Infrastructure components, and Infrastructure represents the outermost layer of the architecture. To implement Clean Architecture successfully, we have some best practices to keep in mind. For instance, Entities and Use Cases should be agnostic to the infrastructure and use plain C# classes, providing a decoupled architecture that avoids excess maintenance. Additionally, applying the SOLID principles ensures that the code is flexible and easily extensible. Lastly, implementing use cases asynchronously can help guarantee better scalability. Each component of Clean Architecture has a specific role to play in the implementation of the overall architecture. Entities represent the business objects, Use Cases implement the business logic, Interface Adapters handle interface translations, and Infrastructure manages the communication to the outside world. 


AI in practice - Celonis’ VP shares how AI can support system & process change

Brown said that, at the moment, Celonis is seeing AI being used to expedite often tedious work, or work that often is prone to human error. Looking back at the adoption of previous general purpose technologies, this makes sense. More often than not the tools that are adopted early on are applied to use cases that take time, don’t add a significant amount of value, and where mistakes are easily made by people. ... Brown also had some thoughts regarding how enterprises should consider their approach to AI adoption, with a focus on not isolating people away from the technology - keeping them close to the change and bringing them along on the journey. Firstly, Brown acknowledged that this is going to be challenging, given the tendency for employees to ‘build empires’ within enterprises and protect them at all costs. She said: I'll go back to a phrase I used for a long, long time and I still use: people don't hurt what they own. So if I'm invested in it, and it's part of what I care about, I'm going to protect it and grow it. If I boil down change management into one sentence, it’s about expectations and accountability. So, what can I expect to be different and what do I need to do differently?


Open Agile Architecture: A Comprehensive Guide for Enterprise Architecture Professionals

Open Agile Architecture equips you with a methodology that seamlessly integrates Agile principles into the realm of enterprise architecture. In today's business environment, change is constant. Open Agile Architecture allows you to respond swiftly and effectively to evolving business needs, technological advancements, and market dynamics. ... Collaboration is at the heart of Agile methodologies, and Open Agile Architecture extends this principle to enterprise architecture. By promoting cross-functional collaboration and open communication, the methodology breaks down silos within the organization. As a practitioner, you'll experience improved collaboration between business and IT teams, fostering a shared understanding of goals and priorities. ... Open Agile Architecture emphasizes an iterative and incremental approach to development. This means that instead of long, rigid planning cycles, you work on delivering incremental value in shorter iterations. This not only ensures continuous progress but also allows you to demonstrate tangible outcomes to stakeholders regularly.


Microsoft Copilot is preparing advances in data protection

As the company has revealed through the Bing blog, Copilot is being prepared to maximize the protection of user and company data that use this system. With this, Microsoft wants to make it clear that the company’s priority is to show that it has no interest in user data while using Copilot services in its 365 versions. It is evident that Copilot is becoming a fundamental piece of Microsoft’s brand strategy, and precisely for that reason they want to distance it from some of the main stigmas that AI currently has. For its part, Copilot is already deeply integrated into various Microsoft services, such as Bing or Teams, where it offers considerable support to the user. One of the concerns that many users have when using Artificial Intelligence is the mere fact of being part of the learning and training process by the AI. As these tools are constantly evolving, many of these systems used the users’ own usage to create variations and advancements in various subjects. However, over time, it has been shown that, in many cases, this has ended up “dumbing down” the AI. However, many users find it quite ironic that an AI, which is trained precisely by collecting data massively illicitly through the Internet, has to actively demonstrate a system that ensures that Copilot will not use user data to continue improving.


Artificial intelligence needs to be trained on culturally diverse datasets to avoid bias

Culture plays a significant role in shaping our communication styles and worldviews. Just like cross-cultural human interactions can lead to miscommunications, users from diverse cultures that are interacting with conversational AI tools may feel misunderstood and experience them as less useful. To be better understood by AI tools, users may adapt their communication styles in a manner similar to how people learned to “Americanize” their foreign accents in order to operate personal assistants like Siri and Alexa. ... AI is already in use as the backbone of various applications that make decisions affecting people’s lives, such as resume filtering, rental applications and social benefits applications. For years, AI researchers have been warning that these models learn not only “good” statistical associations — such as considering experience as a desired property for a job candidate — but also “bad” statistical associations, such as considering women as less qualified for tech positions. As LLMs are increasingly used for automating such processes, one can imagine that the North American bias learned by these models can result in discrimination against people from diverse cultures.



Quote for the day:

''Failure will never overtake me if my de determination to succeed is strong enough.'' -- Og Mandino

Daily Tech Digest - February 16, 2024

GitHub: AI helps developers write safer code, but you need to get the basics right

With cybercriminals largely sticking to the same tactics, it is critical that security starts with the developer. "You can buy tools to prevent and detect vulnerabilities, but the first thing you need to do is help developers ensure they're building secure applications," Hanley said in a video interview with ZDNET. As major software tools, including those that power video-conferencing calls and autonomous cars, are built and their libraries made available on GitHub, if the accounts of people maintaining these applications are not properly secured, malicious hackers can take over these accounts and compromise a library. The damage can be wide-reaching and lead to another third-party breach, such as the likes of SolarWinds and Log4j, he noted. Hanley joined GitHub in 2021, taking on the newly created role of CSO as news of the colossal SolarWinds attack spread. "We still tell people to turn on 2FA...getting the basics is a priority," he said. He pointed to GitHub's efforts to mandate the use of 2FA for all users, which is a process that has been in the works during the last one and a half years and will be completed early this year. 


Why Tomago Aluminium reversed course on its cloud journey

“An ERP solution like ours is massive,” he says, highlighting that this can make it difficult to keep track of everything you are, and not, using. For instance, he says if you’re getting charged $20,000 for electricity, you might want to check your meter and verify that your usage and bill align. “If your electricity meter is locked away and you just get a piece of paper at the end of the month telling you everything’s fine and you owe $20 000, you’re probably going to ask some questions,” he says. Tomago was told everything was secure and running as it should, but they had no way to verify what they were being told was accurate. “We essentially had a swarm of big black boxes,” he says. “We put dollars in and got services out, but couldn’t say to the board, with confidence, that we were really in control of things like compliance, security, and due diligence.” Then in 2020, Tomago moved its ERP system back on-prem — a decision that’s paying dividends. “We now know what our position is from a cyber perspective because we know exactly what our growth rates are, and we know that our systems are up-to-date, and what our cost is because it’s the same every month,” he says.


OpenAI and Microsoft Terminate State-Backed Hacker Accounts

Threat actors linked to Iran and North Korea also used GPT-4, OpenAI said. Nation-state hackers primarily used the chatbot to query open-source information, such as satellite communication protocols, and to translate content into victims' local languages, find coding errors and run basic coding tasks. "The identified OpenAI accounts associated with these actors were terminated," OpenAI said. It conducted the operation in collaboration with Microsoft. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI," the Redmond, Washington-based technology giant said. Microsoft's relationship with OpenAI is under scrutiny by multiple national antitrust authorities. A British government study published earlier this month concluded that large language models may boost the capabilities of novice hackers but so far are of little use to advanced threat actors. China-affiliated Charcoal Typhoon used ChatGPT to research companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns. 


Why Most Founders and Investors Are Wrong About Disruption

Recognizing disruption requires an open mind. In many instances, people can't believe or see something is disruptive at first. They think the idea is foolish or won't work. Disruption is usually caused by something that hasn't existed before or something new. Airbnb is a great example here as well. Its founders are said to have gone to every venture capitalist in Silicon Valley and were famously laughed out of meetings. People couldn't see what they saw — it hadn't been invented yet. Even the most seasoned business leaders can misunderstand and mistake disruption or fail to recognize it. Disruption doesn't always mean extinction. History has proven this for countless companies, processes, products, services, and ideas. Organizations can collapse after big changes. They did not or could not adapt. But something new or different tends to fill in the gap. It's often better, and the cycle continues. I have been on both sides of disruption at my company, BriteCo. We are one of the jewelry industry's disruptors – we were the first to move jewelry consumers to 100% paperless processes with technology and the internet. We also provide our customers with different ways to buy our coverage, unique to BriteCo, versus an outdated analog process at the retail point of sale.


Will generative AI kill KYC authentication?

Lee Mallon, the chief technology officer at AI vendor Humanity.run, sees an LLM cybersecurity threat that goes way beyond quickly making false documents. He worries that thieves could use LLMs to create deep back stories for their frauds in case someone at a bank or government level reviews social media posts and websites to see if a person truly exists. “Could social media platforms be getting seeded right now with AI-generated life histories and images, laying the groundwork for elaborate KYC frauds years down the line? A fraudster could feasibly build a ‘credible’ online history, complete with realistic photos and life events, to bypass traditional KYC checks. The data, though artificially generated, would seem perfectly plausible to anyone conducting a cursory social media background check,” Mallon says. “This isn’t a scheme that requires a quick payoff. By slowly drip-feeding artificial data onto social media platforms over a period of years, a fraudster could create a persona that withstands even the most thorough scrutiny. By the time they decide to use this fabricated identity for financial gains, tracking the origins of the fraud becomes an immensely complex task.”


Generative AI: Shaping a New Future for Fraud Prevention

A new category called "AI Risk Decisioning" is poised to transform the landscape of fraud detection. It leverages the strengths of generative AI, combining them with traditional machine learning techniques to create a robust foundation for safeguarding online transactions. ... The first pillar involves creating a comprehensive knowledge fabric that serves as the foundation for the entire platform. This fabric integrates various internal data sources unique to the company, such as transaction records and real-time customer profiles. ... The third pillar of the AI Risk Decisioning approach focuses on automatic recommendations, offering powerful capabilities for real-time and effective risk management. It can automatically monitor transactions and identify trends or anomalies, suggest relevant features for risk models, conduct scenario analyses independently, and recommend the next best action to optimize performance. ... The fourth pillar of the AI Risk Decisioning approach emphasizes human-understandable reasoning. This pillar aims to make every decision, recommendation, or insight provided by the AI system easily understandable to human users.


Implementing a Digital Transformation Strategy

Actionable intelligence has been accepted as the “new normal” of the data-first enterprise. In the data-first enterprise, data and digital technologies not only open up innovative revenue channels but also create the most compliant (governed) business operations. However, in order for an enterprise to successfully plan, develop, and execute a data-first operating model, the business owners and operators have to first develop a digital transformation strategy – connecting the data piles, digital technologies, business processes, and marketing staff. The digital transformation strategy develops around the need to bridge the gaps between the current data-driven goals and processes and intended future business goals and processes. In a nutshell, the digital transformation strategy strikes a harmonious balance between traditional IT and marketing functions. Global businesses have witnessed firsthand the immense benefits of digital processes, such as improved efficiencies, reduced operating costs, and growth of additional revenue channels. A recent industry survey report indicated that 92% of businesses are already pursuing digital transformation in more than one way. However, the transformation across businesses is at various stages of maturity.


Planning a data lake? Prepare for these 7 challenges

Storing data in a central location simplifies compliance in the sense that you know where your data resides, though it also creates compliance challenges. If you store many different types of data in your lake, different assets may be subject to different compliance standards. Data that contains personally identical information (PII), for instance, must be managed differently in some ways than other types of data to comply with laws like DPA, GDPR or HIPAA. While a data lake won’t prevent you from applying granular security controls to different data assets, it doesn't make it easier, either – and it can make it more difficult if your security and compliance tools are not capable of applying different policies to different data assets within a centralized repository. ... Placing your data into a central location to create a data lake is one thing but connecting it to various applications and the workforce who needs access is another. Until you develop the necessary data integrations – and unless you keep them up to date – your data lake will deliver little value. Building data integrations takes time, effort, and expertise and users sometimes underestimate how difficult it is to create successful data integrations. Be sure and prioritize data integration strategy as part of your overall process.


Does Cloud Native Change Developer Productivity and Experience?

When management focuses too much on developer productivity, developer experience can suffer and thus hurt morale and, paradoxically, productivity as well. It’s important for management to have a light touch to avoid this problem, especially with cloud native. Cloud native environments can become so dynamic and noisy that both productivity and developer experience can decline. Management must take special care to support its developers with the right platforms, tools, processes and productivity metrics to facilitate the best outcomes, leveraging platform engineering to create and manage IDPs that facilitate cloud native development despite its inherent complexity. After all, the complexity of cloud native development alone isn’t the problem. Complexity presents challenges to be sure, but developers are always up for a challenge. Complexity coupled with a lack of visibility brings frustration, lowering productivity and DX. With the right observability, for example, with Chronosphere and Google Cloud, developers have a good shot at untangling cloud native’s inherent complexity, delivering quality software on time and on budget, while maintaining both productivity and DX.


Vulnerability to Resilience: Vision for Cloud Security

In the recent era of cloud-native development and DevSecOps, CISOs face the challenge of fostering a security-conscious culture that spans across various cross-functional teams. However, by adopting deliberate, disruptive, engaging, and enjoyable approaches that also provide a return on investment, a sustainable security culture can be achieved. It is essential to instill the concept of shared responsibility for security and focus on enhancing awareness and adhering to advanced security practices. If you don't already have a secure development lifecycle, it is imperative to integrate one immediately. Recognizing and rewarding individuals who prioritize security is one of the ways to encourage a security-focused culture. Additionally, creating a security community and making security more engaging and enjoyable can also help cultivate a sustainable security culture. CISOs should leverage technical tools and best practices to facilitate the seamless integration of security into the Continuous Integration/Continuous Deployment (CI/CD) pipeline. This can be achieved through various measures, such as conducting threat modeling, adopting a shift-left security approach, incorporating IDE security ...



Quote for the day:

"You may have to fight a battle more than once to win it." -- Margaret Thatcher

Daily Tech Digest - February 15, 2024

CISO and CIO Convergence: Ready or Not, Here It Comes

While CIOs are still responsible for setting and meeting technology goals and for staying on budget, their primary mandate is determining how the company can harness technology to innovate, and then procure and manage those resources. While plenty of companies still maintain large, on-premise IT estate, it's just a matter of time before they digitally transform. Either way, the CIO role has become markedly less operational over time. On the other hand, the profile of CISOs has been growing since the early 2000s, set against a non-stop carousel of compliance mandates, data breaches, and emerging cybersecurity threats. While data breaches may have forced businesses to pay attention to security, it was compliance mandates that funded it. From HIPAA and PCI DSS to GDPR, SOC 2, and more, compliance has been a double-edged sword for CISOs. Compliance increased the role of cybersecurity teams and made them more visible across IT and the business as a whole, providing CISOs with bigger budgets and increased latitude on how to spend it. However, all the effort they put into compliance did little to stymie phishing, ransomware, big breaches, and/or malicious insiders. 


Will Generative AI Kill DevSecOps?

Beyond having automation and guardrails in place, you also need security policies at the company level, Moisset said, to make sure that DevSecOps understands all the generative AI tools colleagues are using. Then you can educate them on how to use it, like creating and communicating a generative AI policy. Because a total ban on GenAI just won’t fly. When Italy temporarily banned ChatGPT, Foxwell said there was a visible decrease in productivity across the country’s GitHub organizations, but, when it was reinstated, “what also picked up was the usage of tools that circumvented all of the government policies and firewalls around the prevention of using these” tools. Engineers always find a way. Particularly when using generative AI for customer service chatbots, Moisset said, you need guardrails in place around both the inputs and outputs, as malicious actors can potentially “socialize” the chatbot via prompt injection to give a desired result — like when someone was able to buy a Chevy for $1 from a chatbot. “It’s back to educating the users and developers that it’s good to use AI, we should be using AI, but we need to actually put guardrails around it,” she said, which also demands an understanding of how your customers interact with GenAI.


Combining heat and compute

Data centers offer a predictable supply of heat because they keep their servers running continuously. But the heat is “low-grade:” It is warm rather than hot, and it comes in the form of air, which is difficult to transport. So, most data centers vent their heat to the atmosphere. Sometimes, there are district heat networks, which provide warmth to local homes and businesses through a piped network. If your data center is near one of these, it is a matter of extending it to connect to the data center, and boosting the grade of heat. But you have to be in the right place to connect to one. “There are certain countries that have established or developing heat networks, but the majority don't have a heat network per se, so it's going on a piecemeal basis,” Neal Kalita, senior director of power and energy at NTT, tells DCD. You are unlikely to find one in the US, says Rolf Brink of cooling consultancy Promersion: “The United States is a fundamentally different ecosystem. But Europe is a lot more dense in terms of population, and there is more heat demand.” The Nordic countries have a lot of heat networks. Stockholm Data Parks is a well-known example - a data center campus in urban Stockholm, where every data center has a connection to the district heating network and gets paid for its heat.


Harmonizing human potential and AI: The evolution of work in the digital era

The evolving landscape of work is witnessing a profound transformation as the fusion of human potential with AI takes center stage. Concerns about the ethical implications of AI are well-known, including the potential for perpetuating bias and discrimination and its impact on employment and job security. Ensuring that AI is developed and deployed ethically and responsibly is crucial, taking into account fairness, transparency and accountability. ... Optimizing human-centric capabilities with automation and an AI-first mindset is significant for long-term success. Consider a telecoms operator with employees struggling to grapple with the labor-intensive process of manually reviewing a high volume of mobile tower lease contracts. By embracing an AI-powered platform equipped with capabilities for faster and more accurate extraction of contract clauses, employees were able to shift their focus toward leveraging hidden risks identified by the platform. This enabled the renegotiation of existing contracts, leading to millions of dollars in savings. It’s no coincidence that the enterprises that are more inclined to augment human potential are those resilient enough to maximize the value of AI-led transformations. 


5 Wi-Fi vulnerabilities you need to know about

Like wired networks, Wi-Fi is susceptible to Denial of Service (DoS) attacks, which can overwhelm a Wi-Fi network with excessive amount of traffic. This can cause the Wi-Fi to become slow or unavailable, disrupting normal operations of the network, or even the business. A DoS attack can be launched by generating a large number of connection or authentication requests, or injecting the network with other bogus data to break the Wi-Fi. ... Wi-jacking occurs when a Wi-Fi-connected device has been accessed or taken over by an attacker. The attacker could retrieve saved Wi-Fi passwords or network authentication credentials on the computer or device. Then they could also install malware, spyware, or other software on the device. They could also manipulate the device’s settings, including the Wi-Fi configuration, to make the device connect to rogue APs. ... RF interference can cause Wi-Fi disruptions. Instead of being caused by bad actors, RF interference could be triggered by poor network design, building changes, or other electronics emitting or leaking into the RF space. Interference can result in degraded performance, reduced throughput, and increased latency.


AI outsourcing: A strategic guide to managing third-party risks

Bias may persist in many face detection systems. Naturally, this misidentification could have severe consequences for the parties involved. Diverse training data and transparent algorithms are necessary to mitigate the risk of discriminatory outcomes. Furthermore, complex AI models often encounter the “black box” problem or how some AI models arrive at their decisions. Teaming with a third-party AI service requires human oversight to navigate the threat of biased algorithms. ... Most of us can admit that the risk of becoming overly reliant on AI is significant. AI can quickly become a go-to solution for many challenges. It’s no surprise that companies face a similar risk, becoming too dependent on a single vendor’s AI solutions. However, this approach can become problematic. Companies can “get stuck,” and switching providers seems almost impossible. ... Quality and reliability concerns are top-of-mind for most company leaders partnering with third-party AI services. Some primary concerns include service outages, performance issues, and unexpected disruptions. Operational resilience is necessary, and contingency plans are a significant piece of the resiliency puzzle, given the damage business downtime can cause. 


Practices for Implementing an Effective Data Governance Strategy

Ensuring the integrity and usability of data within an organization requires implementing clear data quality standards and metrics. These standards serve as a benchmark for data quality, guiding data management practices and ensuring that data is accurate, complete, and reliable. Organizations can streamline their data governance processes by defining what constitutes quality data, making it easier to identify and rectify data issues. This approach enhances data quality, supports compliance with regulatory requirements, and improves decision-making capabilities. Developing a comprehensive set of data quality metrics is crucial for monitoring and maintaining high data standards. These metrics should be aligned with the organization’s strategic objectives and include criteria such as accuracy, completeness, consistency, timeliness, and uniqueness. ... Creating an environment where data stewardship and accountability are at the forefront requires strategic planning and commitment from all levels of an organization. It is essential to embed data governance principles into the corporate culture, ensuring that every team member understands their role in maintaining data integrity and security.


What is the impact of AI on storage and compliance?

Right now, when you look at traditional storage, generally speaking you look at your environment, your ecosystem, your data, classifying that data, and putting a value on it. And, depending on that value and the potential impact, you put in the right security and assign the length of time you need to keep the data and how you keep it, delete it. But, if you look at a CRM [customer relationship management service], if you put the wrong data in then the wrong data comes out, and it’s one set of data. So, to be blunt, garbage in, garbage out. With AI, it’s much more complex than that, so you may have garbage in, but instead of one dataset out that might be garbage, there might be a lot of different datasets and they may or may not be accurate. If you look at ChatGPT, it’s a little bit like a narcissist. It’s never wrong and if you give it some information and then it spits out the wrong information and then you say, “No, that’s not accurate”, it will tell you that’s because you didn’t give it the right dataset. And then at some stage it will stop talking to you, because it will have used up all its capability to argue with you, so to speak. From a compliance perspective, if you are using AI – a complicated AI or a simple AI like ChatGPT – to create a marketing document, that’s OK.


How to Get Your Failing Data Governance Initiatives Back on Track

Data governance is a big lift. Organizations might make the mistake of attempting to roll the initiative out across the entire enterprise without building in the steps to get there. “If you make it too broad and end up not focusing on short-term goals that you can demonstrate to keep the funding going, these engagements [tend] to fail,” says Prasad. Organizational issues are some of the major stumbling blocks standing in the way of successful data governance, but there can also be technical obstacles. Reiter points to the importance of leveraging automation. If an enterprise team attempts to manually undertake data governance mapping, it could be irrelevant by the time it is completed. ... Documentation, or lack thereof, can be a good indicator of a data governance initiatives' progress and sustainability. “As things are changing over time and documentation isn’t updated, that's a great sign that governance is not maintainable,” Holiat says. Getting feedback from end users can alert data governance leaders to issues standing in the way of adoption. Are people throughout the organization frustrated with the data governance program? Does it facilitate their access to data, or is it making their jobs more difficult?


Adopting AI with Eyes Wide Open

For businesses in general, AI can increase efficiency, make the workplace safer, improve customer service, create competitive advantage and lead to new business models and revenue streams. But like any technological innovation, AI has its risks and challenges. At the heart of AI is code and data; code can (and often does) contain bugs, and data can (and often does) contain anomalies. But that is no different to the technological innovations that we have embraced to-date. Arguably, the risks and challenges of AI are greater – not least of all because of the potential breadth of its application – and they include (but are certainly not limited to): overreliance, lack of transparency, ethical concerns, security, and regulatory and statutory challenges which typically lag behind the pace of progress. So, what does have this to do with strategy and architecture, and in particular digital transformation? Too often in organizations, new technologies are rushed in, in the belief that there is no time to lose. Before you know it, the funds and resources have been found to embark on an initiative (programme or project) to adopt it, spearheading the way to the future. It is the future! 



Quote for the day:

"I find that the harder I work, the more luck I seem to have." -- Thomas Jefferson