Daily Tech Digest - August 05, 2024

Faceoff: Auditable AI Versus the AI Blackbox Problem

“The notion of auditable AI extends beyond the principles of responsible AI, which focuses on making AI systems robust, explainable, ethical, and efficient. While these principles are essential, auditable AI goes a step further by providing the necessary documentation and records to facilitate regulatory reviews and build confidence among stakeholders, including customers, partners, and the general public,” says Adnan Masood ... “There are two sides of auditing: the training data side, and the output side. The training data side includes where the data came from, the rights to use it, the outcomes, and whether the results can be traced back to show reasoning and correctness,” says Kevin Marcus. “The output side is trickier. Some algorithms, such as neural networks, are not explainable, and it is difficult to determine why a result is being produced. Other algorithms such as tree structures enable very clear traceability to show how a result is being produced,” Marcus adds. ... Developing explainable AI remains the holy grail and many an AI team is on a quest to find it. Until then, several efforts are underway to develop various ways to audit AI in order to have a stronger grip over its behavior and performance. 


A developer’s guide to the headless data architecture

We call it a “headless” data architecture because of its similarity to a “headless server,” where you have to use your own monitor and keyboard to log in. If you want to process or query your data in a headless data architecture, you will have to bring your own processing or querying “head” and plug it into the data — for example, Trino, Presto, Apache Flink, or Apache Spark. A headless data architecture can encompass multiple data formats, with data streams and tables as the two most common. Streams provide low-latency access to incremental data, while tables provide efficient bulk-query capabilities. Together, they give you the flexibility to choose the format that is most suitable for your use cases, whether it’s operational, analytical, or somewhere in between. ... Many businesses today are building their own headless data architectures, even if they’re not quite calling it that yet, though using cloud services tends to be the easiest and most popular way to get started. If you’re building your own headless data architecture, it’s important to first create well-organized and schematized data streams, before populating them into Apache Iceberg tables.
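
To make that concrete, here is a minimal PySpark sketch of the ingest step. It assumes Spark is configured with the Iceberg runtime and Kafka connector plus a catalog named "lake"; the broker address, topic, and table names are illustrative, not prescribed by the article.

```python
# Minimal sketch: land a schematized Kafka stream in an Apache Iceberg table.
# Assumes Spark has the Iceberg runtime and Kafka connector configured and a
# catalog named "lake"; broker, topic, and table names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("headless-ingest").getOrCreate()

# Schema enforced up front: well-organized, schematized streams come first.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("ts", TimestampType()),
])

orders = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append into Iceberg; any "head" (Trino, Flink, Spark) can then query
# lake.sales.orders independently of this pipeline.
(
    orders.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .toTable("lake.sales.orders")
    .awaitTermination()
)
```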


The Hidden Costs of the Cloud Skills Gap

Properly managing and scaling cloud resources requires expertise in load balancing, auto-scaling, and cost optimization. Without these skills, companies may face inefficiencies, either by over-provisioning or under-utilizing resources. Inexperienced or overstretched staff might struggle with performance optimization, resulting in slower applications and services, which can negatively impact user satisfaction and harm the company's reputation. ... Employees lacking the necessary skills to fully leverage cloud technologies may be less likely to propose innovative solutions or improvements, potentially leading to a lack of new product development and stagnation in business growth. The cloud presents abundant opportunities for innovation, including AI, machine learning, and advanced data analytics. Companies without the expertise to implement these technologies risk missing out on significant competitive advantages and exciting new discoveries. The bottom line is that skilled professionals often drive the adoption of new technologies because they have the knowledge to experiment in the field.


Architectural Retrospectives: The Key to Getting Better at Architecting

The traditional architectural review, especially if conducted by outside parties, often turns into a blame-assignment exercise. The whole point of regular architectural reviews in the MVA approach is to learn from experience so that catastrophic failures never occur. ... The mechanics of running an architectural retrospective session are identical to those of running a Sprint Retrospective in Scrum. In fact, an architectural focus can be added to a more general-purpose retrospective to avoid creating yet another meeting, so long as all the participants are involved in making architectural decisions. This can also be an opportunity to demonstrate that anyone can make an architectural decision, not only the "architects." ... Many teams skip retrospectives because they don’t like to confront their shortcomings. Architectural retrospectives are even more challenging because they examine not just the way the team works, but the way the team makes decisions. But architectural retros have great pay-offs: they can uncover unspoken assumptions and hidden biases that prevent the team from making better decisions. If you retrospect on the way that you create your architecture, you will get better at architecting.


Design flaw has Microsoft Authenticator overwriting MFA accounts, locking users out

Microsoft confirmed the issue but said it was a feature not a bug, and that it was the fault of users or companies that use the app for authentication. Microsoft issued two written statements to CSO Online but declined an interview. Its first statement read: “We can confirm that our authenticator app is functioning as intended. When users scan a QR code, they will receive a message prompt that asks for confirmation before proceeding with any action that might overwrite their account settings. This ensures that users are fully aware of the changes they are making.” One problem with that first statement is that it does not correctly reflect what the message says. The message says: “This action will overwrite existing security information for your account. To prevent being locked out of your account, continue only if you initiated this action from a trusted source.” The first sentence of the warning window is correct, in that the action will indeed overwrite the account. But the second sentence incorrectly tells the user to proceed as long as two conditions are met: that the user initiated the action; and that it is a trusted source.


Automation Resilience: The Hidden Lesson of the CrowdStrike Debacle

Automated updates are nothing new, of course. Antivirus software has included such automation since the early days of the Web, and our computers are all safer for it. Today, such updates are commonplace – on computers, handheld devices, and in the cloud. Such automations, however, aren’t intelligent. They generally perform basic checks to ensure that they apply the update correctly. But they don’t check to see if the update performs properly after deployment, and they certainly have no way of rolling back a problematic update. If the CrowdStrike automated update process had checked to see if the update worked properly and rolled it back once it had discovered the problem, then we wouldn’t be where we are today. ... The good news: there is a technology that has been getting a lot of press recently that just might fit the bill: intelligent agents. Intelligent agents are AI-driven programs that work and learn autonomously, doing their good deeds independently of other software in their environment. As with other AI applications, intelligent agents learn as they go. Humans establish success and failure conditions for the agents and then feed back their results into their models so that they learn how to achieve successes and avoid failures.
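
As a thought experiment, the verify-then-roll-back loop described above might look like the minimal sketch below. The hooks (current_version, apply_update, health_check, rollback) are hypothetical placeholders for a real agent's machinery, not CrowdStrike's actual mechanism.

```python
import time

# Placeholder hooks: in a real agent these would call the platform's
# packaging, telemetry, and version-pinning machinery.
def current_version() -> str: return "1.0.0"
def apply_update(update_id: str) -> None: print(f"applying {update_id}")
def health_check() -> bool: return True
def rollback(version: str) -> None: print(f"rolling back to {version}")

def deploy_with_rollback(update_id: str, checks: int = 3, interval_s: float = 1.0) -> bool:
    previous = current_version()          # remember the known-good state
    apply_update(update_id)
    for _ in range(checks):
        time.sleep(interval_s)
        if not health_check():            # e.g., process alive, no crash loop
            rollback(previous)            # restore the known-good version
            return False
    return True                           # update held up under observation

print(deploy_with_rollback("content-update-291"))
```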


Is HIPAA enough to protect patient privacy in the digital era?

HIPAA requires covered entities to establish strong data privacy policies, but it doesn’t regulate cybersecurity standards. HIPAA was deliberately designed to be tech agnostic, on the basis that this would keep it relevant despite frequent technology changes. But this could be a glaring omission. For example, Change Healthcare, a medical insurance claims clearinghouse, experienced a data breach when a hacker used stolen credentials to enter the network. If Change had implemented multi-factor authentication (MFA), a basic cybersecurity measure, the breach might not have taken place. But MFA isn’t specified in the HIPAA Security Rule, which was passed 20 years ago. Cybersecurity in the healthcare industry falls through the cracks of other regulations. The CISA update in early 2024 requires companies in critical infrastructure industries to report cyber incidents within 72 hours of discovery. ... “Crucially, there are many third-parties in the healthcare ecosystem that our members contract with who would not be considered ‘covered entities’ under this proposal, and therefore, would not be obligated to share or disclose that there had been a substantial cyber incident – or any cyber incident at all,” warns Russell Branzell, president and CEO of CHIME.


The downtime dilemma: Why organizations hesitate to switch IT infrastructure providers

Making a switch is not always an easy decision. So, how can a business be sure it’s doing the right thing? There are four boxes a business should expect its IT infrastructure provider to tick before contemplating a move. Firstly, is the provider there when needed? Reliable, round-the-clock customer support is crucial for addressing any issues that arise before, during, and after a switch. For businesses with small IT departments or limited resources, this external support offers reliable infrastructure management without needing an extensive in-house team. Next, does the provider offer high uptime guarantees and Service Level Agreements (SLAs) outlining compensation for downtime? By prioritizing service providers with the Uptime Institute’s Tier IV classification, businesses are opting for a partner that’s certified as fully fault-tolerant and highly resilient, with expected availability of 99.995 percent. This protects the business’s crucial IT systems, keeping them operational despite disruptive activity such as a cyberattack, failing components, or unexpected outages.
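
Those headline percentages translate into very different downtime budgets. A quick back-of-the-envelope calculation, plain availability arithmetic rather than anything provider-specific, makes the gap concrete:

```python
# What an uptime percentage actually buys you per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for sla in (99.9, 99.99, 99.995):
    downtime_min = MINUTES_PER_YEAR * (1 - sla / 100)
    print(f"{sla}% uptime -> ~{downtime_min:.0f} minutes of downtime per year")

# 99.9%   -> ~526 minutes (~8.8 hours)
# 99.99%  -> ~53 minutes
# 99.995% -> ~26 minutes
```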


Inside CIOs’ response to the CrowdStrike outage — and the lessons they learned

The first thing Alli did was gather the incident response team to assess the situation and establish the company’s immediate response plan. “We had to ensure that we could maintain business continuity while we addressed the implications of the outage,’’ Alli says. Communication was vital and Alli kept leadership and stakeholders informed about the situation and the steps IT was taking with regular updates. “It’s easy to panic in these situations, but we focused on being transparent and calm, which helped to keep the team grounded,’’ Alli says. Additionally, “The lack of access to critical security insights put us at risk temporarily, but more importantly, it highlighted vulnerabilities in our overall security posture. We had to quickly shift some of our security protocols and rely on other measures, which was a reminder of the importance of having a robust backup plan and redundancies in place,’’ Alli says. Mainiero agrees, saying that in this type of situation, “you have to take on a persona — if you’re panicked, your teams are going to panic.” He says that training has taught him never to raise his voice.


SASE: This Time It’s Personal

Working patterns are changing fast. Millennials and Gen Zs – the first truly digital generations – no longer expect to go to the same place every day. Just as the web broke the link between bricks-and-mortar and shopping, we are now seeing the disintermediation of the workplace, which is anywhere and everywhere. The trend was accelerated by the pandemic, but it’s a mistake to believe that the pandemic created hybrid working. So, while SASE makes the right assumptions about the need to integrate networking and security, it doesn't go far enough. The networking and security stack is still office-bound and centralized. If you were designing this from the ground up, you wouldn't start from here. A more radical approach, what we call personal SASE, is to left-shift the networking and security stack all the way to the user edge. Think of it like the transition from the mainframe to the minicomputer to the PC in the early 1980s, a rapid migration of compute power to the end user. Personal SASE involves a similar architectural shift with commensurate productivity gains for the modern hybrid workforce, who expect but rarely get the same level of network performance and seamless security that they currently experience when they step into the office.



Quote for the day:

"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose

Daily Tech Digest - August 04, 2024

Are we prepared for ‘Act 2’ of gen AI?

It’s both logical and tempting to design your AI usage around one large model. You might think you can simply take a giant large language model (LLM) from your Act 1 initiatives and just get moving. However, the better approach is to assemble and integrate a mixture of several models. Just as a human’s frontal cortex handles logic and reasoning while the limbic system deals with fast, spontaneous responses, a good AI system brings together multiple models in a heterogeneous architecture. No two LLMs are alike — and no single model can “do it all.” What’s more, there are cost considerations. The most accurate model might be more expensive and slower. For instance, a faster model might produce a concise answer in one second — something ideal for a chatbot. ... Even in its early days, gen AI quickly presented scenarios and demonstrations that underscore the critical importance of standards and practices that emphasize ethics and responsible use. Gen AI should take a people-centric approach that prioritizes education and integrity by detecting and preventing harmful or inappropriate content — in both user input and model output. For example, invisible watermarks can help reduce the spread of disinformation.
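
A heterogeneous architecture of this kind usually starts with a router that matches each request to the cheapest model that can handle it. The sketch below is an illustrative heuristic only; the model names and the complete() stub are placeholder assumptions, not a real API.

```python
# Illustrative sketch of a heterogeneous-model router: a heuristic policy,
# not anyone's production system. Model names and complete() are placeholders.
def complete(model: str, prompt: str) -> str:
    # Stand-in for a real API call to the chosen model.
    return f"[{model}] answer to: {prompt[:40]}..."

def pick_model(prompt: str, latency_sensitive: bool) -> str:
    # Crude difficulty signal: long prompts or reasoning-heavy verbs.
    hard = len(prompt) > 2000 or any(k in prompt.lower() for k in ("prove", "analyze", "plan"))
    if latency_sensitive and not hard:
        return "small-fast-model"    # ~1s concise answers, ideal for a chatbot
    return "large-accurate-model"    # slower and costlier, but more capable

def answer(prompt: str, latency_sensitive: bool = True) -> str:
    return complete(pick_model(prompt, latency_sensitive), prompt)

print(answer("What are your store hours?"))
print(answer("Analyze this contract for indemnity risks.", latency_sensitive=False))
```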


Supercharge AIOps Efficiency With LLMs

One of the superpowers LLMs bring to the table is ultra-efficient summarization. Given a dense information block, generative AI models can extract the main points and actionable insights. Like with our earlier trials in algorithmic root cause analysis, we gathered all the data we could surrounding an observed issue, converted it into text-based prompts, and fed it to an LLM along with guidance on how it should summarize and prioritize the data. Then, the LLM was able to leverage its broad training and newfound context to summarize the issues and hypothesize about root causes. By constricting the scope of the prompt and providing the LLM the information and context it needs — and nothing more — we were able to prevent hallucinations and extract valuable insights from the model. ... Another potential application of LLMs is automatically generating post-mortem reports after incidents. Documenting issues and resolutions is not only a best practice but also sometimes a compliance requirement. Rather than scheduling multiple meetings with different SREs, Developers, and DevOps to collect information, could LLMs extract the necessary information from the Senser platform and generate reports automatically?
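
A minimal sketch of that prompt-scoping idea follows; the field names and instructions are illustrative, not Senser's implementation.

```python
# Sketch of scope-constrained prompt assembly for incident summarization:
# pass only the signals tied to the incident, plus explicit instructions,
# to shrink the room the model has to hallucinate.
def build_incident_prompt(incident: dict) -> str:
    signals = "\n".join(f"- {s['source']}: {s['message']}" for s in incident["signals"])
    return (
        "You are assisting an SRE. Using ONLY the signals below, summarize the "
        "issue in 3 bullet points and list up to 2 plausible root causes. "
        "If the signals are insufficient, say so rather than guessing.\n\n"
        f"Service: {incident['service']}\nSignals:\n{signals}"
    )

prompt = build_incident_prompt({
    "service": "checkout-api",
    "signals": [
        {"source": "metrics", "message": "p99 latency 4.2s (baseline 180ms)"},
        {"source": "k8s", "message": "3 pods OOMKilled in the last 10 minutes"},
    ],
})
print(prompt)
```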


“AI toothbrushes” are coming for your teeth—and your data

So-called "AI toothbrushes" have become more common since debuting in 2017. Numerous brands now market AI capabilities for toothbrushes with three-figure price tags. But there's limited scientific evidence that AI algorithms help oral health, and companies are becoming more interested in using tech-laden toothbrushes to source user data. ... Tech-enabled toothbrushes bring privacy concerns to a product that has historically had zero privacy implications. But with AI toothbrushes, users are suddenly subject to a company's privacy policy around data and are also potentially contributing to a corporation's marketing, R&D, and/or sales tactics. Privacy policies from toothbrush brands Colgate-Palmolive, Oral-B, Oclean, and Philips all say the companies' apps may gather personal data, which may be used for advertising and could be shared with third parties, including ad tech companies and others that may also use the data for advertising. These companies' policies say users can opt out of sharing data with third parties or targeted advertising, but it's likely that many users overlook the importance of reading privacy policies for a toothbrush.


4 Strategies for Banks and Their Tech Partners that Save Money and Angst

When it comes to technological alignment between banks and tech partners, it’s about more than ensuring tech stacks are compatible. Cultural alignment on work styles, development cycles and more go into making things work. Both partners should be up front about their expectations. For example, banking institutions have more regulatory and administrative hurdles to jump through than technology companies. While veteran fintech companies will be aware and prepared to move in a more conservative way, early-stage technology companies may be quicker to move and work in more unconventional ways. Prioritization of projects on both ends should always be noted in order to set realistic expectations. For example, tech firms typically have a large pipeline of onboarding ahead. And the financial institution typically has limited tech resources to allocate towards project management. ... Finally, when tech firms and financial institutions work together, a strong dose of reality helps. View upfront costs as a foundation for future returns. Community banking and credit union leaders should focus on the potential benefits and value generation expected three to five years after the project begins.


US Army moves closer to fielding next-gen biometrics collection

Purpose-built as the Army’s forward biometrics collection and matching system, NXGBCC is designed to support access control, identify persons of interest, and provide biometric identities to detainee and intelligence systems. NXGBCC collects, matches, and stores biometric identities and comprises three components: a mobile collection kit, a static collection kit, and a local trusted source. ... The Army said “NXGBCC will add to the number of biometric modalities collected, provide matches to the warfighter in less than three minutes, increase the data sharing capability, and reduce weight, power, and cost.” NXGBCC will use a Local Trusted Source that is composed of a distributed database that’s capable of being used worldwide, data management software, forward biometric matching software, and an analysis portal. Also, NXGBCC collection kit(s) will be composed of one or more collection devices, a credential/badge device, and a document-scanning device. The NXGBCC system employs an integrated system of commercial-off-the-shelf hardware and software that is intended to ensure the end-to-end data flow that’s required to support different technical landscapes during multiple types of operational missions.


The Future of AI: Edge Computing and Qualcomm’s Vision

AI models are becoming increasingly powerful while also getting smaller and more efficient. This advancement enables them to run on edge devices without compromising performance. For instance, Qualcomm’s latest chips are designed to handle large language models and other AI tasks efficiently. These chips are not only powerful but also energy-efficient, making them ideal for mobile devices. One notable example is the Galaxy S24 Ultra, which is equipped with Qualcomm’s Snapdragon 8 Gen 3 chip. This device can perform various AI tasks locally, from live translation of phone calls to AI-assisted photography. Features like live translation and chat assistance, which include tone adjustment, spell check, and translation, run directly on the device, showcasing the potential of edge computing. ... The AI community is also contributing to this trend by developing open-source models that are smaller yet powerful. Innovations like the Mixture of Agents, which allows multiple small AI agents to collaborate on tasks, and Route LLM, which orchestrates which model should handle specific tasks, are making AI more efficient and accessible. 


Software Supply Chain Security: Are You Importing Problems?

In a sense, Software Supply Chain as a strategy, just like Zero Trust, cannot be bought off-the-shelf. It requires a combination of careful planning, changing the business processes, improving communications with your suppliers and customers and, of course, a substantial change in regulations. We are already seeing the first laws introducing stronger punishment for organizations involved in critical infrastructure, with their management facing jail time for heavy violations. Well, perhaps the very definition of “critical” must be revised to include operating systems, public cloud infrastructures, and cybersecurity platforms, considering the potential global impact of these tools on our society.  ... To his practical advice I can only add another bit of philosophical musing: security is impossible without trust, but too much trust is even more dangerous than too little security. Start utilizing the Zero Trust approach for every relationship with a supplier. This can be understood in various ways: from not taking any marketing claim at its face value and always seeking a neutral 3rd party opinion to very strict and formal measures like requiring a high Evaluation Assurance Level of the Common Criteria (ISO 15408) for each IT service or product you deploy.


A CISO’s Observations on Today’s Rapidly Evolving Cybersecurity Landscape

Simply being aware of risks isn’t sufficient. But role-relevant security simulations will empower the entire workforce to know what to do and how to act when they encounter malicious activity. ... Security should be a smooth process, but it is often complicated. Recall the surge in phishing attacks: employees know not to click dubious links from unknown senders, but do they know how to verify if a link is safe or unsafe beyond their gut instinct? Is the employee aware that there is an official email verification tool? Do they even know how to use it? ... It is not uncommon for business leaders to rush technology adoption, delaying security until later as an added feature bolted on afterward. When companies prioritize speed and scalability at the expense of security, data becomes more mobile and susceptible to attack, making it more difficult for security teams to ascertain the natural limitation of a blast radius. Businesses may also end up in security debt. ... Technology continues to evolve at breakneck speed, and organizations must adapt their security strategy appropriately. As such, businesses should adopt a multifaceted, agile, and ever-evolving cybersecurity approach to managing risks.


Future AI Progress Might Not Be Linear. Policymakers Should Be Prepared.

Policymakers and their advisors can act today to address that risk. Firstly, though it might be politically tempting, they should be mindful of overstating the likely progress and impact of current AI paradigms and systems. Linear extrapolations and quickfire predictions make for effective short-term political communication, but they carry substantial risk: If the next generation of language models is, in fact, not all that useful for bioterrorism; if they are not readily adopted to make discriminatory institutional decisions; or if LLM agents do not arrive in a few years, but we reach slowing progress or a momentary plateau instead, policymakers and the public will take note – and be skeptical of warnings in the future. If nonlinear progress is a realistic option, then policy advocacy on AI should proactively consider it: hedge on future predictions, conscientiously name the possibility of plateaus, and adjust policy proposals accordingly. Secondly, the prospect of plateaus makes reactive and narrow policy-making much more difficult. Their risk is instead best addressed by focusing on building up capacity: equip regulators and enforcement with the expertise, access and tools they need to monitor the state of the field.


Building the data center of the future: Five considerations for IT leaders

Disparate centers of data are, in turn, attracting more data, leading to Data Gravity. Localization needs and a Hybrid IT infrastructure are creating problems related to data interconnection. Complex systems require an abstraction layer to move data around to fulfill fast-changing computing needs. IT needs interconnection between workflow participants, applications, multiple clouds, and ecosystems, all from a single interface, without getting bogged down by the complexity wall. ... Increasing global decarbonization requirements mean data centers must address energy consumption caused by high-density computing. ... Global variations in data handling and privacy legislation require that data remain restricted to specific geographical regions. Such laws aren't the only drivers for data localization. The increasing use of AI at the edge, the source of the data, is driving demand for low-latency operations, which in turn requires localized data storage and processing. Concerns about proprietary algorithms being stored in the public cloud are also leading companies to move to a Hybrid IT infrastructure that can harness the best of all worlds.



Quote for the day:

"Perseverance is failing 19 times and succeding the 20th." -- Julie Andrews

Daily Tech Digest - August 03, 2024

Solving the tech debt problem while staying competitive and secure

Technical debt often stems from the costs of running and maintaining legacy technology services, especially older applications. It typically arises when organizations make short-term sacrifices or use quick fixes to address immediate needs without ever returning to resolve those temporary solutions. For CIOs, balancing technical debt with other strategic priorities is a constant challenge. They must decide whether to invest resources in high-profile areas like AI and security or to prioritize reducing technical debt. ... CIOs should invest in robust cybersecurity measures, including advanced threat detection, response capabilities, and employee training. Maintaining software updates and implementing multifactor authentication (MFA) and encryption will further strengthen an organization’s defenses. However, technical debt can significantly undermine these cybersecurity efforts. Legacy systems and outdated software can have vulnerabilities waiting to be exploited. Additionally, technical debt is often represented by multiple, disparate tools acquired over time, which can hinder the implementation of a cohesive security strategy and increase cybersecurity risk.


How to Create a Data-Driven Culture for Your Business

With businesses collecting more data than ever, for data analysts it can be more like scrounging through the bins than panning for gold. “Hiring data scientists is outside the reach of most organizations but that doesn't mean you can’t use the expertise of an AI agent,” Callens says. Once a business has a handle on which metrics really matter, the rest falls into place, organizations can define objectives and then optimize data sources. As the quality of the data improves the decisions are better informed and the outcomes can be monitored more effectively. Rather than each decision acting in isolation it becomes a positive feedback loop where data and decisions are inextricably linked: At that point the organization is truly data driven. Subramanian explains that changing the culture to become more data-driven requires top-down focus. When making decisions stakeholders should be asked to provide data justification for their choices and managers should be asked to track and report on data metrics in their organizations. “Have you established tracking of historical data metrics and some trend analysis?” she says. “Prioritizing data in decision making will help drive a more data-driven culture.”


How Prompt Engineering Can Support Successful AI Projects

Central to the technology is the concept of foundation models, which are rapidly broadening the functionality of AI. While earlier AI platforms were trained on specific data sets to produce a focused but limited output, the new approach throws the doors wide open. In simple — and somewhat unsettling — terms, a foundation model can learn new tricks from unrelated data. “What makes these new systems foundation models is that they, as the name suggests, can be the foundation for many applications of the AI model,” says IBM. “Using self-supervised learning and transfer learning, the model can apply information it’s learnt about one situation to another.” Given the massive amounts of data fed into AI models, it isn’t surprising that they need guidance to produce usable output. ... AI models benefit from clear parameters. One of the most basic is length. OpenAI offers some advice: “The targeted output length can be specified in terms of the count of words, sentences, paragraphs, bullet points, etc. Note however that instructing the model to generate a specific number of words does not work with high precision. The model can more reliably generate outputs with a specific number of paragraphs or bullet points.”
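
Applying that guidance is straightforward. A small illustrative contrast:

```python
# Per the guidance above: counts of bullets or paragraphs are honored far
# more reliably than exact word counts. Both prompts are illustrative.
unreliable = "Summarize the incident report in exactly 87 words."  # rarely lands on 87
reliable = ("Summarize the incident report in 3 bullet points, then add one "
            "short paragraph of recommended next steps.")
```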


Effective Strategies To Strengthen Your API Security

To secure your organisation, you have to figure out where your APIs are, who’s using them and how they are being accessed. This information is important as API deployment increases your organisation’s attack surface making it more vulnerable to threats. The more exposed they are, the greater the chance a sneaky attacker might find a vulnerable spot in your system. Once you’ve pinpointed your APIs and have full visibility of potential points of access, you can start to include them in your vulnerability management processes. By proactively identifying vulnerabilities, you can take immediate action against potential threats. Skipping this step is like leaving the front door wide open. APIs give businesses the power to automate the process and boost operational efficiency. But here’s the thing: with great convenience comes potential vulnerabilities that malicious actors could exploit. If your APIs are internet-facing, then it’s important to put in place rate-limiting to control requests and enforce authentication for every API interaction. This helps take the guesswork out of who gets access to what data through your APIs. Another key measure is using the cryptographic signing of requests.
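
For the last of those measures, here is a minimal HMAC-SHA256 request-signing sketch. The message layout, the shared secret, and the 300-second replay window are illustrative design choices, not a specific vendor's scheme.

```python
# Client signs method, path, timestamp, and body; the server recomputes the
# signature and compares in constant time, rejecting stale timestamps.
import hashlib, hmac, time

SECRET = b"shared-api-secret"  # in practice, per-client keys from a vault

def sign(method: str, path: str, body: bytes, ts: str) -> str:
    msg = b"\n".join([method.encode(), path.encode(), ts.encode(), body])
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify(method: str, path: str, body: bytes, ts: str, signature: str,
           max_skew_s: int = 300) -> bool:
    if abs(time.time() - float(ts)) > max_skew_s:   # reject replayed requests
        return False
    expected = sign(method, path, body, ts)
    return hmac.compare_digest(expected, signature)  # constant-time comparison

ts = str(int(time.time()))
sig = sign("POST", "/v1/payments", b'{"amount": 42}', ts)
assert verify("POST", "/v1/payments", b'{"amount": 42}', ts, sig)
```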


The Time is Now for Network-as-a-Service (NaaS)

As the world’s networking infrastructure has evolved, there is now far more private backbone bandwidth available. Like all cloud solutions, NaaS also benefits from significant ongoing price/performance improvements in commercial hardware. Combined with the growing number of carrier-neutral colocation facilities, NaaS providers simply have many more building blocks to assemble reliable, affordable, any-to-any connectivity for practically any location. The biggest changes derive from the advanced networking and security approaches that today’s NaaS solutions employ. Modern NaaS solutions fully disaggregate control and data planes, hosting control functions in the cloud. As a result, they benefit from practically unlimited (and inexpensive) cloud computing capacity to keep costs low, even as they maintain privacy and guaranteed performance. Even more importantly, the most sophisticated NaaS providers use novel metadata-based routing techniques and maintain end-to-end encryption. These providers have no visibility into enterprise traffic; all encryption/decryption happens only under the business’ direct control.


Criticality in Data Stream Processing and a Few Effective Approaches

With the advancement of stream processing engines like Apache Flink, Spark, etc., we can aggregate and process data streams in real time, as they handle low-latency data ingestion while supporting fault tolerance and data processing at scale. Finally, we can ingest the processed data into streaming databases like Apache Druid, RisingWave, and Apache Pinot for querying and analysis. Additionally, we can integrate visualization tools like Grafana, Superset, etc., for dashboards, graphs, and more. This is the overall high-level data streaming processing life cycle to derive business value and enhance decision-making capabilities from streams of data. Even with its strength and speed, stream processing has drawbacks of its own. A couple of them from a bird's eye view are confirming data consistency, scalability, maintaining fault-tolerance, managing event ordering, etc. Even though we have event/data stream ingestion frameworks like Kafka, processing engines like Spark, Flink, etc., and streaming databases like Druid, RisingWave, etc., we encounter a few other challenges if we drill down further.
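
As one concrete example of the processing stage, here is a minimal Spark Structured Streaming sketch of a one-minute tumbling-window aggregation over a Kafka topic; broker, topic, schema, and the console sink are illustrative, and the watermark shows one standard answer to the event-ordering concern raised above.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("stream-agg").getOrCreate()

schema = StructType([
    StructField("user", StringType()),
    StructField("event_time", TimestampType()),
])

clicks = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clicks")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# The watermark bounds how late events may arrive, letting the engine drop
# old state: one standard answer to event ordering and fault tolerance.
counts = (
    clicks.withWatermark("event_time", "10 minutes")
    .groupBy(window(col("event_time"), "1 minute"), col("user"))
    .count()
)

counts.writeStream.outputMode("update").format("console").start().awaitTermination()
```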


Understanding the Impact of AI on Cloud Spending and How to Harness AI for Enhanced Cloud Efficiency

The real magic happens when AI unlocks advanced capabilities in cloud services. By crunching real-time data, AI transforms how businesses operate, making them more agile and strategic in their approaches. Businesses can gain better scalability, run operations more efficiently, and make smarter, data-driven decisions – all thanks to AI. One of the biggest advantages of AI in the cloud is how it helps companies scale up smoothly. By using AI-driven solutions, businesses can predict future demands and optimise resource allocation accordingly. This means they can handle increased workloads without massive infrastructure overhauls, which is crucial for staying nimble and competitive. Scaling AI in cloud computing isn’t without its challenges, though. It requires strategic approaches like getting leadership buy-in, establishing clear ROI metrics, and using responsible AI algorithms. These steps ensure that AI integration not only scales operations but also does so efficiently and with minimal disruption. AI algorithms continuously monitor workload patterns and can make recommendations on adjusting resource allocations accordingly.
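
Stripped to its essence, that kind of right-sizing recommendation can be illustrated with the toy sketch below; real systems use far richer forecasting, and the thresholds here are arbitrary assumptions.

```python
# Toy sketch of AI-assisted right-sizing: watch recent utilization and
# recommend scaling when a smoothed signal leaves a target band.
from statistics import mean

def recommend(cpu_history: list[float], target_low=0.30, target_high=0.70) -> str:
    smoothed = mean(cpu_history[-12:])   # e.g., last hour of 5-minute samples
    if smoothed > target_high:
        return "scale up: sustained utilization above target band"
    if smoothed < target_low:
        return "scale down: paying for idle capacity"
    return "hold: utilization within target band"

print(recommend([0.82, 0.78, 0.91, 0.85, 0.80, 0.88,
                 0.79, 0.84, 0.90, 0.86, 0.83, 0.87]))
```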


Blockchain Technology and Modern Banking Systems

“Zumo's innovative approach to integrating digital assets into traditional banking systems leverages APIs to simplify the process.” As Nick Jones explains, its Crypto Invest solution offers a digital asset custody and exchange service that can be seamlessly incorporated into a bank's existing IT infrastructure. “This provides consumer-facing retail banks with a compliance-focused route to offer their customers the option to invest in digital assets,” says Nick. By doing so, banks can generate new revenue streams, enabling customers to buy, hold and sell crypto within the familiar confines of their own banking platform. Recognising the regulatory and operational challenges faced by banks, Nick Jones believes in developing a sustainable and long-term approach, with a focus on delivering the necessary infrastructure. For banks to confidently integrate digital asset propositions into their business models, they must address the financial, operational and environmental sustainability of the project. Similarly, Kurt Wuckert highlights the feasibility of a hybrid approach for banks, where blockchain solutions are introduced gradually alongside existing systems. 


The transformation fallacy

Describing the migration process so far, Jordaan says that they started with some of the very critical systems. “One of which was the e-commerce system that runs 50 percent of our revenue,” he says. “That was significant, and provided scalability, because we could add more countries into it, and there are events such as airlines that cancel flights and so our customers would suddenly be looking for bookings.” After that, it was a long-running program of lifting and shifting workloads depending on their priority. The remaining data centers are either “just really complicated” to decommission, or are in the process of being shut down. By the end of next year, Jordaan expects TUI to have just one or two data centers. One of the more unique areas of TUI’s business from an IT perspective is that of the cruise ships. “Cruise ships actually have a whole data center on board,” Jordaan says. “It has completely separate networks for the onboard systems, navigation systems, and everything else, because you're in the middle of the sea. You need all the compute, storage, and networks to run from a data center.” These systems are being transformed, too. Ships are deploying satellite connectivity to bring greater Internet connectivity on board. 


AI and Design Thinking: The Dynamic Duo of Product Development

When designing products that incorporate generative AI, you may feel that you are tipping too far toward being technology-focused. You might be tempted to forego human intuition in order to develop products that embrace AI’s innovation. Or, you may have a more difficult time discerning what is meant to be human and what is meant to be purely technical, because AI is such a new and dynamic field that changes almost weekly. The human/machine duality is precisely why combining human-centric Design Thinking with the power of Generative AI is so effective for product development. Design Thinking isn’t merely a method; it’s a mindset focusing on user needs, iterative learning, and cross-functional teamwork—all of which are essential for pioneering AI-driven products. ... One might say that focusing on a solution to a problem, instead of the problem itself, is quite an empathetic way to approach it. Empathy, a cornerstone of Design Thinking, allows developers to understand their users deeply. ... While AI is a powerful tool, it’s crucial to maintain ethical standards and monitor for biases. Generative AI should not be considered a replacement for human ethics and critical thinking. Instead, use it as a collaborative component for enhancing creativity and efficiency.



Quote for the day:

"The litmus test for our success as Leaders is not how many people we are leading, but how many we are transforming into leaders" -- Kayode Fayemi

Daily Tech Digest - August 02, 2024

Small language models and open source are transforming AI

From an enterprise perspective, the advantages of embracing SLMs are multifaceted. These models allow businesses to scale their AI deployments cost-effectively, an essential consideration for startups and midsize enterprises that need to maximize their technology investments. Enhanced agility becomes a tangible benefit as shorter deployment times and easier customization align AI capabilities more closely with evolving business needs. Data privacy and sovereignty (perennial concerns in the enterprise world) are better addressed with SLMs hosted on-premises or within private clouds. This approach satisfies regulatory and compliance requirements while maintaining robust security. Additionally, the reduced energy consumption of SLMs supports corporate sustainability initiatives. That’s still important, right? The pivot to smaller language models, bolstered by open source innovation, reshapes how enterprises approach AI. By mitigating the cost and complexity of large generative AI systems, SLMs offer a viable, efficient, and customizable path forward. This shift enhances the business value of AI investments and supports sustainable and scalable growth. 


The Impact and Future of AI in Financial Services

Winston noted that AI systems require vast amounts of data, which raises concerns about data privacy and security. “Financial institutions must ensure compliance with regulations such as GDPR [General Data Protection Regulation] and CCPA [California Consumer Privacy Act] while safeguarding sensitive customer information,” he explained. Simply using general GenAI tools as a quick fix isn’t enough. “Financial services will need a solution built specifically for the industry and leverages deep data related to how the entire industry works,” said Kevin Green, COO of Hapax, a banking AI platform. “It’s easy for general GenAI tools to identify what changes are made to regulations, but if it does not understand how those changes impact an institution, it’s simply just an alert.” According to Green, the next wave of GenAI technologies should go beyond mere alerts; they must explain how regulatory changes affect institutions and outline actionable steps. As AI technology evolves, several emerging technologies could significantly transform the financial services industry. Ludwig pointed out that quantum computers, which can solve complex problems much faster than traditional computers, might revolutionize risk management, portfolio optimization, and fraud detection. 


Is Your Data AI-Ready?

Without proper data contextualization, AI systems may make incorrect assumptions or draw erroneous conclusions, undermining the reliability and value of the insights they generate. To avoid such pitfalls, focus on categorizing and classifying your data with the necessary metadata, such as timestamps, location information, document classification, and other relevant contextual details. This will enable your AI to properly understand the context of the data and generate meaningful, actionable insights. Additionally, integrating complementary data can significantly enhance the information’s value, depth, and usefulness for your AI systems to analyze. ... Although older data may be necessary for compliance or historical purposes, it may not be relevant or useful for your AI initiatives. Outdated information can burden your storage systems and compromise the validity of the AI-generated insights. Imagine an AI system analyzing a decade-old market report to inform critical business decisions—the insights would likely be outdated and misleading. That’s why establishing and implementing robust retention and archiving policies as part of your information life cycle management is critical. 
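
A minimal sketch of what such record-level contextualization might look like, with illustrative field names:

```python
# Attach timestamps, location, and a classification to each record so
# downstream AI can weigh (or exclude) it appropriately; the retention field
# supports the archiving policies discussed above. All names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContextualizedDoc:
    body: str
    source: str
    captured_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    location: str | None = None
    classification: str = "unclassified"   # e.g., public / internal / restricted
    retention_until: str | None = None     # drives archiving of stale data

doc = ContextualizedDoc(
    body="Q3 market analysis ...",
    source="crm-export",
    location="EU",
    classification="internal",
    retention_until="2027-01-01",
)
print(doc)
```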


Generative AI: Good Or Bad News For Software

There are plenty of examples of breaches that started thanks to someone copying over code and not checking it thoroughly. Think back to the Heartbleed exploit, a security bug in a popular library that led to the exposure of hundreds of thousands of websites, servers and other devices which used the code. Because the library was so widely used, the thought was, of course, someone had checked it for vulnerabilities. But instead, the vulnerability persisted for years, quietly used by attackers to exploit vulnerable systems. This is the darker side to ChatGPT; attackers also have access to the tool. While OpenAI has built some safeguards to prevent it from answering questions regarding problematic subjects like code injection, the CyberArk Labs team has already uncovered some ways in which the tool could be used for malicious reasons. Breaches have occurred due to blindly incorporating code without thorough verification. Attackers can exploit ChatGPT, using its capabilities to create polymorphic malware or produce malicious code more rapidly. Even with safeguards, developers must exercise caution. ChatGPT generates the code, but developers are accountable for it.


FinOps Can Turn IT Cost Centers Into a Value Driver

Once FinOps has been successfully implemented within an organization, teams can begin to automate the practice while building a culture of continuous improvement. Leaders can now better forecast and plan, leading to more precise budgeting. Additionally, GenAI can provide unique insights into seasonality. For example, if a resource demand spikes every three days, or at other unpredictable frequencies, AI can help you detect these patterns so you can optimize by scaling up when required and back down to save costs during lulls in demand. This kind of pattern detection is difficult without AI. It all goes back to the concept of understanding value and total cost. With FinOps, IT leaders can demonstrate exactly what they spend on and why. They can point out how the budget for software licenses and labor is directly tied to IT operations outcomes, translating into greater resiliency and higher customer satisfaction. They can prove that they’ve spent money responsibly and that they should retain that level of funding because it makes the business run better. FinOps and AI advancements allow businesses to do more and go further than they ever could. Almost 65% of CFOs are integrating AI into their strategy.
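
The periodicity detection described here can be illustrated with plain autocorrelation, a toy sketch rather than what any FinOps product actually ships:

```python
# Find the dominant recurrence interval in a demand series (e.g., a spike
# every third day) by picking the lag with the strongest autocorrelation.
def autocorr(xs: list[float], lag: int) -> float:
    n = len(xs) - lag
    mu = sum(xs) / len(xs)
    num = sum((xs[i] - mu) * (xs[i + lag] - mu) for i in range(n))
    den = sum((x - mu) ** 2 for x in xs)
    return num / den if den else 0.0

# Daily resource demand with a spike every 3rd day (synthetic example).
demand = [10, 11, 42, 10, 12, 45, 11, 10, 44, 12, 11, 43, 10, 12, 41]
best = max(range(2, 8), key=lambda lag: autocorr(demand, lag))
print(f"strongest periodicity at lag {best} days")   # expect 3
```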


The convergence of human and machine in transforming business

To achieve a true collaboration between humans and machines, it is crucial to establish a clear understanding and definition of their respective roles. By emphasizing the unique strengths of AI while strategically addressing its limitations, organizations can create a synergy that maximizes the potential of both human expertise and machine capabilities. AI excels in data structuring, capable of transforming complex, unstructured information into easily searchable and accessible content. This makes it an invaluable tool for sorting through vast online datasets, including datasets, news articles, academic reports and other forms of digital content, extracting meaningful insights. Moreover, AI systems operate tirelessly, functioning 24/7 without the need for breaks or downtime. This "always on" nature ensures a constant state of productivity and responsiveness, enabling organizations to keep pace with the rapidly changing market. Another key strength of AI lies in its scalability. As data volumes continue to grow and the complexity of tasks increases, AI can be integrated into existing workflows and systems, allowing businesses to process and analyze vast amounts of information efficiently.


The Crucial Role of Real-time Analytics in Modern SOCs

Security analysts often spend considerable time manually correlating diverse data sources to understand the context of specific alerts. This process leads to inefficiency, as they must scan various sources, determine if an alert is genuine or a false positive, assess its priority, and evaluate its potential impact on the organization. This tedious and lengthy process can lead to analyst burnout, negatively impacting SOC performance. ... Traditional Security Information and Event Management (SIEM) systems in SOCs struggle to effectively track and analyze sophisticated cybersecurity threats. These legacy systems often burden SOC teams with false positives and negatives. Their generalized approach to analytics can create vulnerabilities and strain SOC resources, requiring additional staff to address even a single false positive. In contrast, real-time analytics or analytics-driven SIEMs offer superior context for security alerts, sending only genuine threats to security teams. ... Staying ahead of potential threats is crucial for organizations in today's landscape. Real-time threat intelligence plays a vital role in proactively detecting threats. Through continuous monitoring of various threat vectors, it can identify and stop suspicious activities or anomalies before they cause harm.


Architecting with AI

Every project is different, and understanding the differences between projects is all about context. Do we have documentation of thousands of corporate IT projects that we would need to train an AI to understand context? Some of that documentation probably exists, but it's almost all proprietary. Even that's optimistic; a lot of the documentation we would need was never captured and may never have been expressed. Another issue in software design is breaking larger tasks up into smaller components. That may be the biggest theme of the history of software design. AI is already useful for refactoring source code. But the issues change when we consider AI as a component of a software system. The code used to implement AI is usually surprisingly small — that's not an issue. However, take a step back and ask why we want software to be composed of small, modular components. Small isn't "good" in and of itself. ... Small components reduce risk: it's easier to understand an individual class or microservice than a multi-million line monolith. There's a well-known paper that shows a small box, representing a model. The box is surrounded by many other boxes that represent other software components: data pipelines, storage, user interfaces, you name it.


Hungry for resources, AI redefines the data center calculus

With data centers near capacity in the US, there’s a critical need for organizations to consider hardware upgrades, he adds. The shortage is exacerbated because AI and machine learning workloads will require modern hardware. “Modern hardware provides enhanced performance, reliability, and security features, crucial for maintaining a competitive edge and ensuring data integrity,” Warman says. “High-performance hardware can support more workloads in less space, addressing the capacity constraints faced by many data centers.” The demands of AI make for a compelling reason to consider hardware upgrades, adds Rob Clark, president and CTO at AI tool provider Seekr. Organizations considering new hardware should pull the trigger based on factors beyond space considerations, such as price and performance, new features, and the age of existing hardware, he says. Older GPUs are a prime target for replacement in the AI era, as memory per card and performance per chip increases, Clark adds. “It is more efficient to have fewer, larger cards processing AI workloads,” he says. While AI is driving the demand for data center expansion and hardware upgrades, it can also be part of the solution, says Timothy Bates, a professor in the University of Michigan College of Innovation and Technology. 


How to Bake Security into Platform Engineering

A key challenge for platform engineers is modernizing legacy applications, which include security holes. “Platform engineers and CIOs have a responsibility to modernize by bridging the gap between the old and new and understanding the security implications between the old and new,” he says. When securing the software development lifecycle, organizations should secure both continuous integration and continuous delivery/continuous deployment pipelines as well as the software supply chain, Mercer says. Securing applications entails “integrating security into the CI/CD pipelines in a seamless manner that does not create unnecessary friction for developers,” he says. In addition, organizations must prioritize educating employees on how to secure applications and software supply chains. ... As part of baking security into the software development process, security responsibility shifts from the cybersecurity team to the development organization. That means security becomes as much a part of deliverables as quality or safety, Montenegro says. “We see an increasing number of organizations adopting a security mindset within their engineering teams where the responsibility for product security lies with engineering, not the security team,” he says.



Quote for the day:

“If you really want to do something, you will work hard for it.” -- Edmund Hillary

Daily Tech Digest - August 01, 2024

These are the skills you need to get hired in tech

While soft skills are important, communicating them to a prospective employer can present a conundrum. Tina Wang, division vice president of human resources at ADP, said there are a few ways for job seekers to bring attention to their behavioral skills. It goes beyond just listing “strong work ethic” or “problem solving” on a resume, “though it’s good to add it there too,” she said. Job seekers can incorporate behavior skills in a track record of job experiences. ... An interview with a prospective employer is also a good time to introduce behavioral skills, but time is limited and job-seekers won’t likely be able to share all their demonstrated skills and experience. “Preparation will go a long way, so think through your talking points and what is important to share,” Wang said. “Think about a few applicable, real work experiences where you demonstrated these skills and sketch out how and when to bring them during the interview process.” References can also be an excellent way to highlight behavioral skills. Intangibles such as a strong work ethic or attention to detail might be something former managers, team members or peers identify. 


Ideal authentication solution boils down to using best tools to stop attacks

Given the shifting nature of work, with more employees working remotely, the variety of gaps in protection is manifold. Clunky authentication experiences mean users are often asked to sign in multiple times a day for different applications and accounts. “Users get extremely frustrated when this occurs, and they end up having resistance to adopting these authentication methods,” Anderson says. To improve the situation, organizations need to manage authentication scenarios in onboarding, session tokens to remember login – and the reality of username and password authentication still being used extensively throughout the security landscape, leaving vulnerabilities to fraud. “Passkeys are good for users because they simplify and streamline the actual authentication ceremony itself, where the user is actively involved,” Miller says. “It doesn’t necessarily decrease the number of times they have to authenticate but it does make it simpler and less taxing.” “They also have knock-on benefits of reducing the amount of information that leaks in the case of a database leak that can be used by an attacker. It shrinks the blast radius of account compromise.”


Should Today’s Developers Be More or Less Specialized?

“The need for specialists is not going to change. If anything, I expect it to increase,” says Hillion. “We still have a number of clients who rely on full-stack developers. I would say the general trend is towards businesses needing more specialized developers who have the right combination of technical skillsets and sector knowledge to deliver what is needed into the complex tech stack. There is significant demand for developers who specialize in particular industry sectors.” ... “Without basic knowledge, pursuing any specific development area is challenging,” says Ivanov. “That’s why starting by mastering basic technologies that someone is most proficient in, which helps them learn new things faster,” says Ivanov in an email interview. “However, core technologies should not be the end goal. It is also essential to stay up to date with technology trends and always continue using new technology.” Tasks that go beyond standard or general requirements need the involvement of specialists who have knowledge and experience in specific areas. For example, a project that requires complex algorithms or specific technologies will require a specialist with a deep understanding of them.


Between sustainability and risk: why CIOs are considering small language models

“In LLMs, the bulk of the data work is done statistically and then IT trains the model on specific topics to correct errors, giving it targeted quality data,” he says. “SLMs cost much less and require less data, but, precisely for this reason, the statistical calculation is less effective and, therefore, very high-quality data is needed, with substantial work by data scientists. Otherwise, with generic data, the model risks producing many errors.” Furthermore, SLMs are so promising and interesting for companies that even big tech offers and advertises them, like Google’s Gemma and Microsoft’s Phi-3. For this reason, according to Esposito, governance remains fundamental, within a model that should remain a closed system. “An SLM is easier to manage and becomes an important asset for the company in order to extract added value from AI,” he says. “Otherwise, with large models and open systems, you have to agree to share strategic company information with Google, Microsoft, and OpenAI. This is why I prefer to work with a system integrator that can develop customizations and provide a closed system, for internal use. 


Why geographical diversity is critical to build effective and safe AI tools

Geographical diversity is critical as organizations look to develop AI tools that can be adopted worldwide, according to Andrea Phua, senior director of the national AI group and director of the digital economy office at Singapore's Ministry of Digital Development and Information (MDDI). ... "The use of Gen AI has brought a new dimension to cyber threats. As AI becomes more accessible and sophisticated, threat actors will also become better at exploiting it," said CSA's chief executive and Commissioner of Cybersecurity David Koh. "As it is, AI already poses a formidable challenge for governments around the world [and] cybersecurity professionals would know that we are merely scratching the surface of gen AI's potential, both for legitimate applications and malicious uses," Koh said. He pointed to reports of AI-generated content, including deepfakes in video clips and memes, that have been used to sow discord and influence the outcome of national elections. At the same time, there are new opportunities for AI to be tapped to enhance cyber resilience and defense, he said. 


Cloud Migration Regrets: Should You Repatriate?

With increasing pressure to cut costs, many CTOs and CIOs are considering repatriating cloud workloads back on premises. As hard as it may seem, it’s important to think beyond just the cost. You must understand workload requirements to make sound decisions for each application. ... A lot of organizations have forgotten how much IT operations have changed since moving to the cloud. Cloud transformation meant revamping ITOps based on the chosen mix of Infrastructure-, Platform-, or Software-as-a-Service (IaaS, PaaS, or SaaS) offerings. Bringing applications back on premises strips away those service layers, and Ops teams may no longer be able or willing to accept the administrative and maintenance burden again. One final consideration before moving workloads off the cloud is security. I think security is one of the many advantages of cloud infrastructure. When businesses first started moving to the cloud, security was one of the biggest concerns. It turns out that cloud providers are better at security than you are. They can’t fix security holes in your software or other operator-error scenarios, but cloud infrastructure provides greater isolation if a breach does occur.


Chess, AI & future of leadership

As computing power increases and its access cost falls, AI will become the central force that drives all activities, including imagination! So, imagine the chessboard being AI-enabled. The board now has its own intelligence, with the ability to understand the context of the game and prompt the next set of moves. The difference between the board-level AI and the AI used by the player as her assistant is that the assistant knows the player’s defensive or attacking psyche and the strengths and weaknesses of both the player and her opponent, and factors these in while offering suggestions. The two AIs may or may not be aligned in their suggestions, since each may be drawing on different references. Let’s activate the third dimension in chess – the pieces are also intelligent! They know their roles and those of the others. They too can think, strategise, and suggest. For instance, in a choice to move between the rook and the knight, the rook suggests the knight moves. The knight feels the Queen should move! This is the egalitarian version of chess! Does it feel real and practical? In the context of AI, there’s the Large Language Model, which processes data from a vast set of sources under a large number of constraints and rules.


DigiCert validation bug sets up 83,267 SSL certs for revoking

One of the validation methods approved by the Certification Authority Browser Forum (CABF), whose guidelines provide best practices for securing internet transactions in browsers and other software, involves the customer adding a DNS CNAME record that includes a random value supplied by its certificate provider. The provider, in this case DigiCert, then does a DNS lookup and verifies that the random value is as provided, confirming that the customer controls the domain. The CABF requires that, in one format of the DNS CNAME entry, the random value be prefixed with an underscore, and DigiCert discovered that, in some cases, that character was not included, rendering the validation non-compliant. By CABF rules, those certificates must be revoked within 24 hours, with no exceptions. However, DigiCert said in an update to its status page Tuesday, and in an email to customers, “Unfortunately, some customers operating critical infrastructure are not in a position to have all their certificates reissued and deployed in time without critical service interruptions. To avoid disruption to critical services, we have engaged with browser representatives alongside these customers over the last several hours. ...”
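
Conceptually, the check DigiCert performs is simple, which is what made the missing underscore so consequential. Below is a minimal sketch of the lookup, assuming the dnspython library and an illustrative record layout (the random value as an underscore-prefixed label); this is not DigiCert's actual implementation.

```python
# A minimal sketch of the CABF-style domain-control check described above,
# using the dnspython library. The record layout (random value as an
# underscore-prefixed subdomain label) is an illustrative assumption.
import dns.resolver

def domain_control_validated(domain: str, random_value: str) -> bool:
    """Confirm the customer created the expected validation CNAME record."""
    # CABF rules require the underscore prefix in this entry format;
    # DigiCert's bug was that some code paths omitted it.
    record_name = f"_{random_value}.{domain}"
    try:
        answers = dns.resolver.resolve(record_name, "CNAME")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    # A successful lookup shows the customer controls DNS for the domain.
    return len(answers) > 0
```

Only someone who controls the domain's DNS zone can create that record, which is what makes the lookup proof of control; the underscore prefix exists to keep the validation label from colliding with a real hostname.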


Mind the Gap: Data Quality Is Not “Fit for Purpose”

When talking about data quality, we must therefore be clear about whose purpose, what requirements, established when, and by whom. Within the context of the DMBoK definition, the answer is that every consumer evaluates the quality of a data set independently. Data is considered to be of high quality when it is fit for my purpose and satisfies my requirements, as established by me, when I need the data. Data quality, defined in this way, is truly in the eye of the beholder. Furthermore, data quality analyses cannot be leveraged by new consumers. For decades, we in decision support have been selling the benefits of leveraging data across applications and analyses. It has been the fundamental justification for data warehouses, data lakes, data lakehouses, and so on. But misalignment between the purpose for which data was created and the purpose for which it is being used may not be immediately apparent, especially when the data is not well understood. The consequences are faulty models and erroneous analyses. We reflexively blame the quality of the data, but that’s not where the problem lies. This is not data quality. It is data fitness.


Navigating Hope and Fear in a Socio-Technical Future

It is not about just spending more; that isn’t really working. You must SPEND BETTER. I and other architects literally train for decades to both cut costs and make great investment decisions. Technical debt accrual, technical health goals, and technical strategy don’t just deserve a seat at the table. They are becoming the table. A little more rationally: in all complex engineering fields, we are required to get signoff from legitimate professionals who have been measured against legitimate and hard-earned competencies. Not only does this create more stable outcomes, it actually saves and makes the economy money. Instead of paying for two OK systems, we pay for one great one. ... In all complex engineering ecosystems, it is not just outputs and companies that are regulated. The roles and skills of architects and engineers are not secret, and they really aren’t that different from company to company. I believe I am the world’s expert on architecture skills, or at least one of a dozen such experts. I have interviewed and assessed hundreds of companies and thousands of architects. It is time to begin licensing. And it must be handed to a real professional society. It cannot be a vendor consortium.



Quote for the day:

"You’ll never achieve real success unless you like what you’re doing." -- Dale Carnegie

Daily Tech Digest - July 31, 2024

Rise of Smart Testing: How AI is Revolutionizing Software Testing

Development teams must use new techniques when creating and testing applications. Test automation can greatly increase productivity and accuracy, but traditional frameworks frequently necessitate a great deal of manual labor for script construction and maintenance, which may restrict their efficacy and capacity to grow. ... Agile development is based on continuous improvement, but rapid code changes can put a strain on conventional test automation techniques. This is where self-healing test automation scripts come in. Test cases that become brittle and need ongoing maintenance slow down the release cycle. AI-capable frameworks are able to recognize these changes and adjust accordingly. This translates into shorter release cycles, less maintenance overhead, and self-healing test scripts. ... Extensive test coverage is a difficult goal to accomplish using traditional testing techniques. Artificial intelligence (AI) fills this gap by automatically generating a wide range of test cases based on an evaluation of requirements, code, and previous tests. This covers positive and negative scenarios, as well as edge cases, that human testers might overlook.
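
As a rough illustration of the self-healing idea, the sketch below tries a ranked list of locators and falls back when the primary one breaks. The helper name and healing strategy are illustrative assumptions layered on Selenium's standard Python API, not a specific framework's implementation.

```python
# A minimal sketch of a self-healing locator: if the primary locator fails
# after a UI change, fall back to alternates and report which one worked so
# the test script can be updated ("healed").
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    """Try each (by, value) locator in order; return the first match."""
    for by, value in locators:
        try:
            element = driver.find_element(by, value)
            # A real AI-powered framework would promote this locator to
            # primary and rewrite the stored test script automatically.
            return element, (by, value)
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

# Usage: primary ID first, then fallbacks learned from the page structure.
# element, healed = find_with_healing(driver, [
#     (By.ID, "submit-btn"),
#     (By.CSS_SELECTOR, "button[type='submit']"),
#     (By.XPATH, "//button[contains(text(), 'Submit')]"),
# ])
```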


What CISOs need to keep CEOs (and themselves) out of jail

Considering the changes in the Cybersecurity Framework 2.0 (CSF 2.0) emphasizing governance and communication with the board of directors, Sullivan is right to assume that liability will not stop at the CISO and will likely move upwards. In his essay, Sullivan urges CEOs to give CISOs greater resources to do their jobs. But if he’s talking about funding to purchase more security controls, this might be a hard sell for CEOs. ... CEOs would benefit from showing that they care about cybersecurity and adding metrics to company reports to demonstrate it is a significant concern. For CISOs, agreeing on a set of metrics with the CEO would provide a visible North Star and a forcing function for aligning resources and headcount to ensure the metrics continue to trend in the right direction. ... CEOs who are serious about cybersecurity must prioritize collaborating with their CISOs and putting them in the rotation for regular meetings. A healthy budget increase for tools may be necessary as AI injects many new risks, but it is neither sufficient nor the most important step. CISOs need better people and better processes to deliver on promises of keeping the enterprise safe.


Who should own cloud costs?

The exponential growth of AI and generative AI initiatives is often identified as the true culprit. Although packed with potential, these advanced technologies consume extensive cloud resources, driving up costs that organizations often struggle to manage effectively. The main issues usually stem from a lack of visibility and control over these expenses. The problem goes beyond just tossing around the term “finops” at meetings. It comes down to a fundamental understanding of who owns and controls cloud costs in the organization. Trying to identify cloud cost ownership and control often becomes a confusing free-for-all. ... Why does giving engineering control over cloud costs make such a difference? For one, engineers are typically closer to the actual usage and deployment of cloud resources. When they build something to run on the cloud, they are more aware of how applications and data storage systems consume cloud resources. Engineers can quickly identify and rectify inefficiencies, ensuring that cloud resources are used cost-effectively. Moreover, engineers with skin in the game are more likely to align their projects with broader business goals, translating technical decisions into tangible business outcomes.
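
One concrete way to give engineers that ownership is to let them query spend directly. The sketch below assumes AWS and the boto3 Cost Explorer API; the date range and grouping are illustrative, not from the article.

```python
# A minimal sketch of per-service cost visibility for engineers, assuming
# AWS and the boto3 Cost Explorer client ("ce").
import boto3

ce = boto3.client("ce")
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-07-01", "End": "2024-08-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print last month's spend per service so inefficiencies stand out.
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${cost:,.2f}")
```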


Generative AI and Observability – How to Know What Good Looks Like

In software development circles, observability is often defined as the combination of logs, traces, and metric data that shows how applications perform. In classic mechanical engineering and control theory, observability looks at the inputs and outputs of a system to judge how changes affect the results. In practice, looking at the initial requests and what gets returned provides data that can be used for judging performance. Alongside this, there is the quality of the output to consider as well. Did the result answer the user’s question, and how accurate was the answer? Were there any hallucinations in the response that would affect the user? And where did those results come from? Tracking AI hallucination rates across different LLMs and services shows how those services perform; the levels of inaccuracy vary from around 2.5 percent to 22.4 percent. All the steps involved in managing your data and generative AI app can affect the quality and speed of response at runtime. For example, retrieval-augmented generation (RAG) allows you to find and deliver company data in the right format to the LLM so that this context can produce a more relevant response.
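
In practice, capturing those inputs and outputs means instrumenting the call path. The sketch below wraps a hypothetical LLM client call and emits structured latency and size metrics; the call_llm stand-in and the logged fields are illustrative assumptions, not a standard API.

```python
# A minimal sketch of capturing observability signals (latency, input and
# output sizes) around an LLM call. call_llm is a hypothetical stand-in
# for whatever client the application actually uses.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai.observability")

def observed_completion(call_llm, prompt: str, context: str = ""):
    """Wrap an LLM call and emit structured metrics for later analysis."""
    start = time.perf_counter()
    response = call_llm(prompt=prompt, context=context)  # hypothetical client
    latency_ms = (time.perf_counter() - start) * 1000
    log.info(json.dumps({
        "latency_ms": round(latency_ms, 1),
        "prompt_chars": len(prompt),
        "context_chars": len(context),   # RAG context size affects quality and speed
        "response_chars": len(response),
        # Quality checks (groundedness or hallucination scoring) would be
        # attached here, e.g. by comparing the response to the RAG context.
    }))
    return response
```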


Security platforms offer help to tame product complexity, but skepticism remains

The biggest issue enterprises cited was what they saw as an inherent contradiction between the notion of a platform, which to them had the connotation of a framework on which things were built, and the specialization of most offerings. “You can’t have five foundations for one building,” one CSO said sourly, and pointed out that there are platforms for network, cloud, data center, application, and probably even physical security. While there was an enterprise hope that platforms would somehow unify security, they actually seemed to divide it. ... It seems to me that divided security responsibility, arising from the lack of a single CSO in charge, is also a factor in the platform question. Vendors who sell into such an account not only have less incentive to promote a unifying security platform vision, they may have a direct motivation not to do that. Of 181 enterprises, 47 admit that their security portfolio was created, and is sustained, by two or more organizations, and every enterprise in this group is without a CSO. Who would a security platform provider call on in these situations? Would any of the organizations involved in security want to share their decision power with another group?


The cost of a data breach continues to escalate

The type of attack influenced the financial damage, the report noted. Destructive attacks, in which the bad actors delete data and destroy systems, cost the most: $5.68 million per breach ($5.23 million in 2023). Data exfiltration, in which data is stolen, and ransomware, in which data is encrypted and a ransom demanded, came second and third, at $5.21 million and $4.91 million respectively. However, noted Fritz Jean-Louis, principal cybersecurity advisor at Info-Tech Research Group, sometimes attackers combine their tactics. “Double extortion ransomware attacks are a key factor that is influencing the cost of data breaches,” he said in an email. “Since 2023, we have observed that ransomware attacks now include double extortion attacks ... “This risk of shadow data will become even more elevated in the AI era, with data serving as the foundation on which new AI-powered applications and use-cases are being built,” added Jennifer Kady, vice president, security at IBM. “Gaining control and visibility over shadow data from a security perspective has emerged as a top priority as companies move quickly to adopt generative AI, while also ensuring security and privacy are at the forefront.”


If You Are Reachable, You Are Breachable, and Firewalls & VPNs Are the Front Door

It’s about understanding that the network is no longer a castle to be fortified but a conduit only, with entity-to-entity access authorized discretely for every connection based on business policies informed by the identity and context of the entities connecting. Gone are IP-based policies and ACLs, persistent tunnels, trusted and untrusted zones, and implicit trust. With a zero-trust architecture in place, the internet becomes the corporate network and point-to-point networking fades in relevance over time. Firewalls become like the mainframe – serving a diminishing set of legacy functions – and no longer hindering the agility of a mobile and cloud-driven enterprise. This shift is not just a technical necessity but also a regulatory and compliance imperative. With government bodies mandating zero-trust models and new SEC regulations requiring breach reporting, warning shots have been fired. Cybersecurity is no longer just an IT issue; it has elevated to a boardroom priority, with far-reaching implications for business continuity and reputation. Many access control solutions have claimed to adopt zero-trust by adding dynamic trust. 
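
To make the contrast with IP-based policies concrete, the sketch below evaluates each access request against identity and device context with a default-deny posture. The policy schema and field names are illustrative assumptions, not a specific product's model.

```python
# A minimal sketch of per-connection, identity- and context-based
# authorization in place of IP-based ACLs: no implicit trust, default deny.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_compliant: bool  # posture signal, e.g. from an MDM/EDR agent
    resource: str

# Business policy: who may reach what, under which conditions.
POLICIES = {
    "payroll-app": {"roles": {"finance"}, "require_compliant_device": True},
    "wiki": {"roles": {"finance", "engineering"}, "require_compliant_device": False},
}

USER_ROLES = {"alice": {"finance"}, "bob": {"engineering"}}

def authorize(req: AccessRequest) -> bool:
    """Evaluate each connection on identity and context; default deny."""
    policy = POLICIES.get(req.resource)
    if policy is None:
        return False  # implicit trust is gone: unknown resources are denied
    if not USER_ROLES.get(req.user_id, set()) & policy["roles"]:
        return False
    if policy["require_compliant_device"] and not req.device_compliant:
        return False
    return True

# Usage:
# authorize(AccessRequest("alice", True, "payroll-app"))  -> True
# authorize(AccessRequest("bob", True, "payroll-app"))    -> False
```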


Indian construction industry leads digital transformation in Asia pacific

“While challenges like the increasing prices of raw materials and growing competition persist in the Indian market, its current strong economic state and steady outlook for the forthcoming years, as reported by the IMF, have provided a congenial atmosphere for businesses to evaluate and adopt newer technologies, and consequently lead the Asia Pacific market in terms of investments in transformational technologies. Indian businesses have aptly recognised this phase as the ideal time to leverage digital technologies to identify newer growth pockets, usher in efficiencies throughout project lifecycles, and gain a competitive edge,” said Sumit Oberoi, Senior Industry Strategist, Asia Pacific at Autodesk. “Priority areas for construction businesses to improve digital adoption include starting small, selecting a digital champion, tracking a range of success measures, and asking whether your business is AI ready,” he added. ... David Rumbens, Partner at Deloitte Access Economics, said, “The Indian construction sector, fuelled by a surge in demand for affordable housing as well as supportive government policies to boost urban infrastructure, is poised to make a strong contribution as India’s economy grows by 6.9% over the next year.”


Recovering from CrowdStrike, Prepping for the Next Incident

In the future, organizations could consider whether outside factors make a potential software acquisition riskier, Sayers said. A product widely used by Fortune 100 companies, for example, carries the added risk of being an attractive target to attackers hoping to hit many such victims in a single attack. “There is a soft underbelly in the global IT world, where you can have instances where a particular piece of software or a particular vendor is so heavily relied upon that they themselves could potentially become a target in the future,” Sayers said. Organizations also need to identify any single points of failure in their environments: instances where they rely on an IT solution whose disruption, whether deliberate or accidental, could take down their whole organization. When one is identified, they need to begin planning around the risks and looking for backup processes. Sayers noted that some types of resiliency measures may be too expensive for most organizations to adopt; some entities are already priced out of just backing up all their data, and many would be unable to afford maintaining alternate, backup IT infrastructure to which they could fail over.


AI And Security: It Is Complicated But Doesn't Need To Be

While AI may present a potential risk for companies, it could also be part of the solution. Because AI processes information differently from humans, it can look at issues differently and come up with breakthrough solutions. For example, AI produces better algorithms and can solve mathematical problems that humans have struggled with for many years. As such, when it comes to information security, algorithms are king, and AI, machine learning (ML), or a similar cognitive computing technology could come up with a way to secure data. This is a real benefit of AI, as it can not only identify and sort massive amounts of information but also identify patterns, allowing organisations to see things that they never noticed before. This brings a whole new element to information security. ... As these solutions will bring benefits to the workplace, companies may consider putting non-sensitive data into systems to limit exposure of internal data sets while driving efficiency across the organisation. However, organisations need to realise that they can’t have it both ways: data they put into such systems will not remain private.



Quote for the day:

“When we give ourselves permission to fail, we, at the same time, give ourselves permission to excel.” -- Eloise Ristad