Daily Tech Digest - August 07, 2024

Should You Buy or Build an AI Solution?

Training an AI model is not cheap; ChatGPT cost $10 million to train in its current form, while the cost to develop the next generation of AI systems is expected to be closer to $1 billion. Traditional AI tends to cost less than generative AI because it runs on fewer GPUs, yet even the smallest scale of AI projects can quickly reach a $100,000 price tag. Building an AI model should only be done if it’s expected that you will recoup building costs within a reasonable time horizon. ... The right partner will help integrate new AI applications into the existing IT environment and, as mentioned, provide the talent required for maintenance. Choosing an existing model tends to be cheaper and faster than building a new one. Still, the partner or vendor must be vetted carefully. Vendors with an established history of developing AI will likely have better data governance frameworks in place. Ask them about policies and practices directly to see how transparent they are. Are they flexible enough to make said policies align with yours? Will they demonstrate proof of their compliance with your organization’s policies? The right partner will be prepared to offer data encryption, firewalls, and hosting facilities to ensure regulatory requirements are met, and to protect company data as if it were their own.


Business Data Privacy Standards and the Impact of Artificial Intelligence on Data Governance

Artificial intelligence technologies, including machine learning and natural language processing, have revolutionized how businesses analyze and utilize data. AI systems can process vast amounts of information at unprecedented speeds, uncovering patterns and generating insights that drive strategic decisions and operational efficiencies. However, the use of AI introduces complexities to data governance. Traditional data governance practices focused on managing structured data within defined schemas. AI, on the other hand, thrives on vast swaths of information and can generate entirely new data. ... As AI continues to evolve, so too must data governance frameworks. Future advancements in AI technologies, such as federated learning and differential privacy, hold promise for enhancing data privacy while preserving the utility of AI applications. Collaborative efforts between businesses, policymakers, and technology experts are essential to navigate these complexities and ensure that AI-driven innovation benefits society responsibly. 


Foundations of Forensic Data Analysis

Forensic data analysis faces a variety of technical, legal, and administrative challenges. Technical factors that affect forensic data analysis include encryption issues, the need for large amounts of disk storage space for data collection and analysis, and anti-forensics methods. Legal challenges can arise in forensic data analysis and can confuse or derail an investigation, such as attribution issues stemming from a malicious program capable of executing malicious activities without the user’s knowledge. These applications can make it difficult to identify whether cybercrimes were deliberately committed by a user or were executed by malware. The complexities of cyber threats and attacks can create significant difficulties in accurately attributing malicious activity. Administratively, the main challenge facing data forensics involves accepted standards and management of data forensic practices. Although many accepted standards for data forensics exist, there is a lack of standardization across and within organizations. Currently, there is no regulatory body that oversees data forensic professionals to ensure they are competent, qualified, and following accepted standards of practice.


Closing the DevSecOps Gap: A Blueprint for Success

Businesses need to start at the top and ensure all DevSecOps team members accept a continuous security focus: Security isn't a one-time event; it's an ongoing process. Leaders must encourage open communication between development, security, and operations teams, which can be achieved with regular meetings and shared communication platforms that facilitate constant collaboration. Developers must learn secure coding practices when building their models, while security and operations teams need to better understand development workflows to create practical security measures. Peer-to-peer communication and training are about partnership, not conflict, and effective DevSecOps thrives on collaboration, not finger-pointing. Only once these personnel changes are implemented can a DevSecOps team successfully execute a shift left security approach and leverage the benefits of technology automation and efficiency. Once internal harmony is achieved, DevSecOps teams can begin consolidating automation and efficiency into their workflows by integrating security testing tools within the CI/CD pipelines.


How micro-credentials can impact the world of digital upskilling in a big way

Micro-credentials, when correctly implemented, can complement traditional degree programmes in a number of ways. Take for example the Advance Centre, in partnership with University College Dublin, Technological University Dublin and ATU Sligo, which offers accredited programmes and modules with the intent of addressing Ireland’s future digital skill needs. “They enable students to gain additional skills and knowledge that supplement their professional field. For example, a mechanical engineer might pursue a micro-credential in cybersecurity or data analytics to enhance their expertise and employability,” said O’Gorman. By bridging very specific skills gaps, micro-credentials can cover materials that may otherwise not be addressed in more traditional degree programmes. “This is particularly valuable in fast-evolving fields where specific up-to-date skills are in high demand.” Furthermore, it is fair to say that balancing work, education and your personal life is no easy feat, but this shouldn’t mean that you have to compromise on your career aspirations. 


Edge Data Center Supports Emerging Trends

Adopting AI technologies requires substantial computational power, storage space and low-latency networking to train and run models. These workloads are well suited to hosted environments, which makes them highly compatible with data centres; as the demand for AI grows, so will the demand for data centres. However, limits on connecting new data centres to the grid remain a challenge and will constrain data centre build-out. This highlights edge data centres as a solution to the data centre capacity problem.  ... With this pressure, cloud computing has emerged as a cornerstone for these modernisation efforts, with companies choosing to move their workloads and applications onto the cloud. This shift has brought challenges around managing costs and ensuring data privacy. As a result, organisations are considering cloud repatriation as a strategic option. Cloud repatriation is essentially the migration of applications, data and workloads from the public cloud environment back to on-premises or colocation data centre infrastructure.


How To Get Rid of Technical Debt for Good

“To get rid of it or minimize it, you should treat this problem as a regular task -- systematically. All technical debt should be precisely defined and fixed with a maximum description of the current state and expected results after the problem is solved,” says Zaporozhets. “As the next step, [plan] the activities related to technical debt -- namely, who, when, and how should deal with these problems. And, of course, regular time should be allocated for this, which means that dealing with technical debt should become a regular activity, like attending daily meetings.” ... Regularly addressing technical debt requires discipline, motivation and systematic behavior from all team members. “When the team stops being afraid of technical debt and starts treating it as a regular task, the pressure will lessen, and there will be a sense of control,” says Zaporozhets. “It's important not to put technical debt on hold. I teach my teammates that each team member must remember to take a systematic approach to technical debt and take initiative. When the whole team works together on this, they will realize that technical debt is not so scary, and controlling the backlog will become a routine task.”


New Orleans CIO Kimberly LaGrue Discusses Cyber Resilience

Cities are engrossed in the business of delivering services to constituents. But appreciating that a cyber interruption could knock down a city makes everyone think about that differently. In our cyberattack, we had the support of the mayor, the chief administrative officer and the homeland security office. The problem was elevated to those levels, and we were grateful that they appreciated the importance of the challenges. The most integral part of a good resilience strategy for government, especially for city government, is for city leaders to pay attention to it and buy into the idea that these are real threats, and they must be addressed. ... We learned of cyberattacks across the state through Louisiana’s fusion center. They were very active, very vocal about other threats. We gained a lot of insights, a lot of information, and they were on the ground helping those agencies to recover. The state had almost 200 volunteers in its response arsenal, led by the Louisiana National Guard and the state of Louisiana’s fusion center. During our cyberattack, the group of volunteers that was helping other agencies came from those events straight to New Orleans for our event.


How cyber insurance shapes risk: Ascension and the limits of lessons learned

As research has supported, simple cost-benefit conditions among victims incentivize immediate payment to cyber criminals unless perfect mitigation with backups is possible and so long as the ransom is priced to correspond with victim budgets. Any delay incurs unnecessary costs to victims, their clients, and — cumulatively — to the insurer. The result is the rapid payment posture mentioned above. The singular character of cyber risk for these companies also sets limits on the lessons that can be learned for the average CISO working to safeguard organizations across the vast majority of America’s private enterprise. ... CISOs across the board should support firmer discussions with the federal government about increasingly strict and even punitive rules for limiting the payout of criminal fees. Limiting criminal incident payouts would remove the incentives for consistent high-tempo strikes on major infrastructure providers, which the federal government could compensate for in the near term by providing better resources for Sector Risk Management Agencies and beginning to resolve the abnormal dynamics surrounding the insurer-critical infrastructure relationship.


Transform, don't just change: Palladium India’s Neha Zutshi

The world of work is evolving rapidly, and HR is at the forefront of this transformation. One of the biggest challenges we face is managing change effectively, as poorly planned and communicated changes often meet resistance and fail. To navigate this, organisations must build the capability to manage change quickly and efficiently. This involves fostering an agile, learning culture where adaptability is valued, and employees are encouraged to embrace new ways of working. Upskilling and reskilling are critical in this process, ensuring that our workforce remains relevant and equipped to handle emerging challenges. ... Technology and AI are pervasive, permeating every industry, and HR is no exception. Various aspects of AI, such as machine learning and digital systems, have streamlined HR processes and automated mundane tasks. However, even though there are early adopter advantages, it is crucial to assess the need and risks related to adopting innovative HR technologies. Policy and ethical considerations must be addressed when adopting these technologies. Clear policies governing confidentiality, fairness, and accuracy are essential to ensure a smooth transition.



Quote for the day:

"Successful and unsuccessful people do not vary greatly in their abilities. They vary in their desires to reach their potential." -- John Maxwell

Daily Tech Digest - August 06, 2024

Why the Network Matters to Generative AI

Applications, today, are distributed. Our core research tells us more than half (60%) of organizations operate hybrid applications; that is, with components deployed in core, cloud, and edge locations. That makes the Internet their network, and the lifeline upon which they depend for speed and, ultimately, security. Furthermore, our focused research tells us that organizations are already multi-model, on average deploying 2.9 models. And where are those models going? Just over one-third (35%) are deploying in both public cloud and on-premises. Applications that use those models, of course, are being distributed in both environments. According to Red Hat, some of those models are being used to facilitate the modernization of legacy applications. ... One is likely tempted to ask why we need such a thing. The problem is we can’t affect the Internet. Not really. For all our attempts to use QoS to prioritize traffic and carefully select the right provider, who has all the right peering points, we can’t really do much about it. For one thing, over-the-Internet connectivity doesn’t typically reach into another environment, in which there are all kinds of network challenges like overlapping IP addresses, not to mention the difficulty in standardizing security policies and monitoring network activity.


Aware of what tech debt costs them, CIOs still can’t make it an IT priority

The trick for CIOs who have significant tech debt is to sell it to organization leadership, he says. One way to frame the need to address tech debt is to tie it to IT modernization. “You can’t modernize without addressing tech debt,” Saroff says. “Talk about digital transformation.” ... “You don’t just say, ‘We’ve got an old ERP system that is out of vendor support,’ because they’ll argue, ‘It still works; it’s worked fine for years,’” he says. “Instead, you have to say, ‘We need a new ERP system because you have this new customer intimacy program, and we’ll either have to spend millions of dollars doing weird integrations between multiple databases, or we could upgrade the ERP.’” ... “A lot of it gets into even modernization as you’re building new applications and new software,” he says. “Oftentimes, if you’re interfacing with older platforms that have sources of data that aren’t modernized, it can make those projects delayed or more complicated.” As organizational leaders push CIOs to launch AI projects, an overlooked area of tech debt is data management, adds Ricardo Madan, senior vice president for global technology services at IT consulting firm TEKsystems.


Is efficiency on your cloud architect’s radar?

Remember that we can certainly measure the efficiency of each of the architecture’s components, but that only tells you half of the story. A system may have anywhere from 10 to 1,000 components. Together, they create a converged architecture, which provides several advantages in measuring and ensuring efficiency. Converged architectures facilitate centralized management by combining computing, storage, and networking resources. ... With an integrated approach, converged architectures can dynamically distribute resources based on real-time demand. This reduces idle resources and enhances utilization, leading to better efficiency. Automation tools embedded within converged architectures help automate routine tasks such as scaling, provisioning, and load balancing. These tools can adjust resource allocation in real time, ensuring optimal performance without manual intervention. Advanced monitoring tools and analytics platforms built into converged architectures provide detailed insights into resource usage, cost patterns, and performance metrics. This enables continuous optimization and proactive management of cloud resources.


ITSM concerns when integrating new AI services

The key to establishing stringent access controls lies in feeding each LLM only the information that its users should consume. This approach eliminates the concept of a generalist LLM fed with all the company’s information, thereby ensuring that access to data is properly restricted and aligned with user roles and responsibilities. ... To maintain strict control over sensitive data while leveraging the benefits of AI, organizations should adopt a hybrid approach that combines AI-as-a-Service (AIaaS) with self-hosted models. For tasks involving confidential information, such as financial analysis and risk assessment, deploying self-hosted AI models ensures data security and control. Meanwhile, utilizing AIaaS providers like AWS for less sensitive tasks, such as predictive maintenance and routine IT support, allows organizations to benefit from the scalability and advanced features offered by cloud-based AI services. This hybrid strategy ensures that sensitive data remains secure within the organization’s infrastructure while taking advantage of the innovation and efficiency provided by AIaaS for other operations.
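
As a rough illustration of the hybrid approach described above, the sketch below routes each request either to a self-hosted model or to an external AIaaS endpoint based on a data-sensitivity tag. The tags, endpoint addresses, and the call_model helper are hypothetical placeholders rather than any vendor's API.

```python
# Minimal sketch: route AI requests by data sensitivity (hybrid AIaaS / self-hosted).
# Endpoint URLs, tags, and the transport layer are illustrative assumptions.

SENSITIVE_TAGS = {"financial", "pii", "risk"}          # tasks that must stay in-house
SELF_HOSTED_ENDPOINT = "http://llm.internal:8080/v1"   # self-hosted model (hypothetical)
AIAAS_ENDPOINT = "https://api.example-aiaas.com/v1"    # cloud AIaaS provider (hypothetical)

def pick_endpoint(task_tags: set[str]) -> str:
    """Send anything touching confidential data to the self-hosted model."""
    if task_tags & SENSITIVE_TAGS:
        return SELF_HOSTED_ENDPOINT
    return AIAAS_ENDPOINT

def call_model(endpoint: str, prompt: str) -> str:
    # Placeholder: a real implementation would use the chosen endpoint's client library.
    return f"[response from {endpoint}]"

def handle_request(prompt: str, task_tags: set[str]) -> str:
    endpoint = pick_endpoint(task_tags)
    return call_model(endpoint, prompt)

# Example: a risk-assessment task stays on the internal model,
# a routine IT-support task goes to the AIaaS provider.
print(handle_request("Summarise this quarter's credit exposure.", {"financial"}))
print(handle_request("Suggest troubleshooting steps for a printer error.", {"it-support"}))
```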


Fighting Back Against Multi-Staged Ransomware Attacks Crippling Businesses

Ransomware has evolved from lone wolf hackers operating from basements to complex organized crime syndicates that operate just like any other professional organization. Modern ransomware gangs employ engineers that develop the malware and platform; employ help desk staff to answer technical queries; employ analysts that identify target organizations; and ironically, employ PR pros for crisis management. The ransomware ecosystem also comprises multiple groups with specific roles. For example, one group (operators) builds and maintains the malware and rents out their infrastructure and expertise (a.k.a. ransomware-as-a-service). Initial access brokers specialize in breaking into organizations and selling the acquired access, data, and credentials. Ransomware affiliates execute the attack, compromise the victim, manage negotiations, and share a portion of their profits with the operators. Even state-sponsored attackers have joined the ransomware game due to its potential to cause wide-scale disruption and because it is very lucrative.


Optimizing Software Quality: Unit Testing and Automation

Any long-term project without proper test coverage is destined to be rewritten from scratch sooner or later. Unit testing is a must-have for the majority of projects, yet there are cases when one might omit this step. For example, you are creating a project for demonstration purposes. The timeline is very tough. Your system is a combination of hardware and software, and at the beginning of the project, it's not entirely clear what the final product will look like. ... in automation testing the test cases are executed automatically. It happens much faster than manual testing and can be carried out even during nighttime as the whole process requires minimal human intervention. This approach is an absolute game changer when you need to get quick feedback. However, as with any automation, it may need substantial time and financial resources during the initial setup stage. Even so, it is well worth using, as it will make the whole process more efficient and the code more reliable. The first step here is to understand if the project incorporates test automation. You need to ensure that the project has a robust test automation framework in place.
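
For a concrete picture of the unit-testing baseline discussed above, here is a minimal example using Python's built-in unittest module; the discount function and its business rule are invented purely for illustration, and a CI pipeline could run the suite automatically (for example, nightly).

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: discounts must be between 0 and 100 percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()   # a CI pipeline can run this automatically, e.g. on every commit or nightly
```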


In the age of gen AI upskilling, learn and let learn

Gen AI upskilling is neither a one-off endeavor nor a quick fix. The technology’s sophistication and ongoing evolution requires dedicated educational pathways powered by continuous learning opportunities and financial support. So, as leaders, we need to provide resources for employees to participate in learning opportunities (that is, workshops), attend third-party courses offered by groups like LinkedIn, or receive tuition reimbursements for upskilling opportunities found independently. We must also ensure that these resources are accessible to our entire employee base, regardless of the nature of an employee’s day-to-day role. From there, you can institutionalize mechanisms for documenting and sharing learnings. This includes building and popularizing communication avenues that motivate employees to share feedback, learn together and surface potential roadblocks. Encouraging a healthy dialogue around learning, and contributing to these conversations yourself, often leads to greater innovation across your organization. At my company, we tend to blend the learning and sharing together. 


Embracing Technology: Lessons Business Leaders Can Learn from Sports Organizations

To maintain their competitive edge, sports organizations are undertaking comprehensive digital transformations. Digital technologies are integrated across all facets of operations, transforming people, processes, and technology. Data analytics guide decisions in areas such as player recruitment, game strategies, and marketing efforts.  ... The convergence of sports and technology reveals new business opportunities. Sponsorships from technology companies showcase their capabilities to targeted audiences and open up new markets. Innovations in sports technology, such as advanced training equipment and analytical tools, are driving unprecedented possibilities. By embracing these insights, business leaders can unlock new avenues for growth and innovation in their own industries. Partnering with technology firms can lead to the development of new products, services, and market opportunities, ensuring sustained success and relevance in an ever-evolving business landscape.


Containerization Can Render Apps More Agile Painlessly

Application development and deployment methods will change because the app developer no longer has to think about the integration of an app with an underlying operating system and associated infrastructure. This is because the container already has the correct configuration of all these elements. If app developers want to install their app immediately in both Linux and Windows environments, they can do it. ... Most IT staff have found that they need specialized tools for container management, and that they can’t use the tools that they are accustomed to. Tools such as Kubernetes and products from companies like Dynatrace and Docker provide container management capabilities, but mastering them requires IT staff to be trained on the tools. Security and governance also present challenges in the container environment because each container packages its own operating system image, even though containers share the host kernel. If an OS security vulnerability is discovered, the base OS images used by all containers must be synchronously patched to resolve the vulnerability. In cases like this, it’s ideal to have a means of automating the fix process, but it might be necessary to do it manually at first.
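
As a rough sketch of what automating that patch rollout could look like, the loop below pulls a refreshed base image and rebuilds each dependent image with the Docker CLI. The image names and build-context layout are assumptions made for illustration.

```python
import subprocess

BASE_IMAGE = "ubuntu:22.04"                      # hypothetical shared base image
DEPENDENT_IMAGES = {                             # image tag -> build context (assumed layout)
    "shop/web-frontend:latest": "./frontend",
    "shop/orders-api:latest": "./orders",
}

def rebuild_on_patched_base() -> None:
    # Fetch the base image that now contains the security patch.
    subprocess.run(["docker", "pull", BASE_IMAGE], check=True)
    # Rebuild every image that derives FROM that base so all containers pick up the fix.
    for tag, context in DEPENDENT_IMAGES.items():
        subprocess.run(["docker", "build", "--pull", "-t", tag, context], check=True)

if __name__ == "__main__":
    rebuild_on_patched_base()
```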


Can AI even be open source? It's complicated

Clearly, we need to devise an open-source definition that fits AI programs to stop these faux-source efforts in their tracks. Unfortunately, that's easier said than done. While people constantly fuss over the finer details of what's open-source code and what isn't, the Open Source Initiative (OSI) has nailed down the definition, the Open Source Definition (OSD), for almost twenty years. The convergence of open source and AI is much more complicated. In fact, Joseph Jacks, founder of the venture capital (VC) firm FOSS Capital, argued there is "no such thing as open-source AI" since "open source was invented explicitly for software source code." It's true. In addition, open-source's legal foundation is copyright law. As Jacks observed, "Neural Net Weights (NNWs) [which are essential in AI] are not software source code -- they are unreadable by humans, nor are they debuggable." As Stefano Maffulli, OSI executive director, has told me, software and data are mixed in AI, and existing open-source licenses are breaking down. Specifically, trouble emerges when all that data and code are merged in AI/ML artifacts -- such as datasets, models, and weights.



Quote for the day:

"Leadership does not depend on being right." -- Ivan Illich

Daily Tech Digest - August 05, 2024

Faceoff: Auditable AI Versus the AI Blackbox Problem

“The notion of auditable AI extends beyond the principles of responsible AI, which focuses on making AI systems robust, explainable, ethical, and efficient. While these principles are essential, auditable AI goes a step further by providing the necessary documentation and records to facilitate regulatory reviews and build confidence among stakeholders, including customers, partners, and the general public,” says Adnan Masood ... “There are two sides of auditing: the training data side, and the output side. The training data side includes where the data came from, the rights to use it, the outcomes, and whether the results can be traced back to show reasoning and correctness,” says Kevin Marcus. “The output side is trickier. Some algorithms, such as neural networks, are not explainable, and it is difficult to determine why a result is being produced. Other algorithms such as tree structures enable very clear traceability to show how a result is being produced,” Marcus adds. ... Developing explainable AI remains the holy grail and many an AI team is on a quest to find it. Until then, several efforts are underway to develop various ways to audit AI in order to have a stronger grip over its behavior and performance. 


A developer’s guide to the headless data architecture

We call it a “headless” data architecture because of its similarity to a “headless server,” where you have to use your own monitor and keyboard to log in. If you want to process or query your data in a headless data architecture, you will have to bring your own processing or querying “head” and plug it into the data — for example, Trino, Presto, Apache Flink, or Apache Spark. A headless data architecture can encompass multiple data formats, with data streams and tables as the two most common. Streams provide low-latency access to incremental data, while tables provide efficient bulk-query capabilities. Together, they give you the flexibility to choose the format that is most suitable for your use cases, whether it’s operational, analytical, or somewhere in between. ... Many businesses today are building their own headless data architectures, even if they’re not quite calling it that yet, though using cloud services tends to be the easiest and most popular way to get started. If you’re building your own headless data architecture, it’s important to first create well-organized and schematized data streams, before populating them into Apache Iceberg tables.
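
To make the streams-plus-tables idea concrete, here is a hedged PySpark sketch that plugs one processing "head" into a Kafka stream and appends the results to an Apache Iceberg table. The broker address, topic, catalog name, and checkpoint path are assumptions; an equivalent pipeline could be built with Flink, and the resulting table queried with Trino.

```python
from pyspark.sql import SparkSession

# Assumes a Spark session already configured with the Kafka and Iceberg connectors
# and an Iceberg catalog named "lake" (cluster-specific setup is omitted here).
spark = SparkSession.builder.appName("headless-demo").getOrCreate()

# Bring your own "head": read the low-latency stream...
orders = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker address
    .option("subscribe", "orders")                      # hypothetical topic
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

# ...and land it in a bulk-queryable Iceberg table for analytical heads (Trino, Flink, etc.).
query = (
    orders.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "s3://checkpoints/orders")  # hypothetical path
    .toTable("lake.analytics.orders_raw")
)
query.awaitTermination()
```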


The Hidden Costs of the Cloud Skills Gap

Properly managing and scaling cloud resources requires expertise in load balancing, auto-scaling, and cost optimization. Without these skills, companies may face inefficiencies, either by over-provisioning or under-utilizing resources. Inexperienced or overstretched staff might struggle with performance optimization, resulting in slower applications and services, which can negatively impact user satisfaction and harm the company's reputation. ... Employees lacking the necessary skills to fully leverage cloud technologies may be less likely to propose innovative solutions or improvements, potentially leading to a lack of new product development and stagnation in business growth. The cloud presents abundant opportunities for innovation, including AI, machine learning, and advanced data analytics. Companies without the expertise to implement these technologies risk missing out on significant competitive advantages and exciting new discoveries. The bottom line is that skilled professionals often drive the adoption of new technologies because they have the knowledge to experiment in the field.


Architectural Retrospectives: The Key to Getting Better at Architecting

The traditional architectural review, especially if conducted by outside parties, often turns into a blame-assignment exercise. The whole point of regular architectural reviews in the MVA approach is to learn from experience so that catastrophic failures never occur. ... The mechanics of running an architectural retrospective session are identical to those of running a Sprint Retrospective in Scrum. In fact, an architectural focus can be added to a more general-purpose retrospective to avoid creating yet another meeting, so long as all the participants are involved in making architectural decisions. This can also be an opportunity to demonstrate that anyone can make an architectural decision, not only the "architects." ... Many teams skip retrospectives because they don’t like to confront their shortcomings. Architectural retrospectives are even more challenging because they examine not just the way the team works, but the way the team makes decisions. But architectural retros have great pay-offs: they can uncover unspoken assumptions and hidden biases that prevent the team from making better decisions. If you retrospect on the way that you create your architecture, you will get better at architecting.


Design flaw has Microsoft Authenticator overwriting MFA accounts, locking users out

Microsoft confirmed the issue but said it was a feature not a bug, and that it was the fault of users or companies that use the app for authentication. Microsoft issued two written statements to CSO Online but declined an interview. Its first statement read: “We can confirm that our authenticator app is functioning as intended. When users scan a QR code, they will receive a message prompt that asks for confirmation before proceeding with any action that might overwrite their account settings. This ensures that users are fully aware of the changes they are making.” One problem with that first statement is that it does not correctly reflect what the message says. The message says: “This action will overwrite existing security information for your account. To prevent being locked out of your account, continue only if you initiated this action from a trusted source.” The first sentence of the warning window is correct, in that the action will indeed overwrite the account. But the second sentence incorrectly tells the user to proceed as long as two conditions are met: that the user initiated the action; and that it is a trusted source.


Automation Resilience: The Hidden Lesson of the CrowdStrike Debacle

Automated updates are nothing new, of course. Antivirus software has included such automation since the early days of the Web, and our computers are all safer for it. Today, such updates are commonplace – on computers, handheld devices, and in the cloud. Such automations, however, aren’t intelligent. They generally perform basic checks to ensure that they apply the update correctly. But they don’t check to see if the update performs properly after deployment, and they certainly have no way of rolling back a problematic update. If the CrowdStrike automated update process had checked to see if the update worked properly and rolled it back once it had discovered the problem, then we wouldn’t be where we are today. ... The good news: there is a technology that has been getting a lot of press recently that just might fit the bill: intelligent agents. Intelligent agents are AI-driven programs that work and learn autonomously, doing their good deeds independently of other software in their environment. As with other AI applications, intelligent agents learn as they go. Humans establish success and failure conditions for the agents and then feed back their results into their models so that they learn how to achieve successes and avoid failures.
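
The paragraph above essentially describes a deploy-verify-rollback loop. A minimal sketch of that idea follows; the health_check probe and file-copy deployment are generic stand-ins, not a description of CrowdStrike's actual update mechanism.

```python
import shutil
from pathlib import Path

def health_check() -> bool:
    """Hypothetical post-deployment probe: ping the service, run smoke tests, etc."""
    return True

def deploy_update(new_file: Path, target: Path) -> bool:
    backup = target.with_name(target.name + ".bak")
    shutil.copy2(target, backup)          # keep the last known-good version
    shutil.copy2(new_file, target)        # apply the update
    if health_check():
        return True                       # update verified after deployment, keep it
    shutil.copy2(backup, target)          # verification failed: roll back automatically
    return False
```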


Is HIPAA enough to protect patient privacy in the digital era?

HIPAA requires covered entities to establish strong data privacy policies, but it doesn’t regulate cybersecurity standards. HIPAA was deliberately designed to be tech agnostic, on the basis that this would keep it relevant despite frequent technology changes. But this could be a glaring omission. For example, Change Healthcare, a medical insurance claims clearinghouse, experienced a data breach when a hacker used stolen credentials to enter the network. If Change had implemented multi-factor authentication (MFA), a basic cybersecurity measure, the breach might not have taken place. But MFA isn’t specified in the HIPAA Security Rule, which was passed 20 years ago. Cybersecurity in the healthcare industry falls through the cracks of other regulations. The CISA update in early 2024 requires companies in critical infrastructure industries to report cyber incidents within 72 hours of discovery. ... “Crucially, there are many third-parties in the healthcare ecosystem that our members contract with who would not be considered ‘covered entities’ under this proposal, and therefore, would not be obligated to share or disclose that there had been a substantial cyber incident – or any cyber incident at all,” warns Russell Branzell, president and CEO of CHIME.


The downtime dilemma: Why organizations hesitate to switch IT infrastructure providers

Making a switch is not always an easy decision. So, how can a business be sure it’s doing the right thing? There are four boxes that a business should look for its IT infrastructure provider to tick before contemplating a move. Firstly, is the provider there when needed? Reliable round-the-clock customer support is crucial for addressing any issues that arise before, during, and after a switch. For businesses with small IT departments or limited resources, this external support offers reliable infrastructure management without needing an extensive in-house team. Next, does the provider offer high uptime guarantees and Service Level Agreements (SLAs) outlining compensation for downtime? By prioritizing service providers with Uptime Institute’s tier 4 classification, businesses are opting for a partner that’s certified as fully fault-tolerant, highly resilient, and guaranteeing an uptime of 99.995 percent. This protects the business’ crucial IT systems, keeping them operational despite disruptive activity such as a cyberattack, failing components, or unexpected outages.


Inside CIOs’ response to the CrowdStrike outage — and the lessons they learned

The first thing Alli did was gather the incident response team to assess the situation and establish the company’s immediate response plan. “We had to ensure that we could maintain business continuity while we addressed the implications of the outage,’’ Alli says. Communication was vital and Alli kept leadership and stakeholders informed about the situation and the steps IT was taking with regular updates. “It’s easy to panic in these situations, but we focused on being transparent and calm, which helped to keep the team grounded,’’ Alli says. Additionally, “The lack of access to critical security insights put us at risk temporarily, but more importantly, it highlighted vulnerabilities in our overall security posture. We had to quickly shift some of our security protocols and rely on other measures, which was a reminder of the importance of having a robust backup plan and redundancies in place,’’ Alli says. Mainiero agrees, saying that in this type of situation, “you have to take on a persona — if you’re panicked, your teams are going to panic.” He says that training has taught him never to raise his voice.


SASE: This Time It’s Personal

Working patterns are changing fast. Millennials and GenZs – the first true digital generation – no longer expect to go to the same place every day. Just as the web broke the link between bricks and mortar and shopping, we are now seeing the disintermediation of the workplace, which is anywhere and everywhere. The trend was accelerated by the pandemic, but it’s a mistake to believe that the pandemic created hybrid working. So, while SASE makes the right assumptions about the need to integrate networking and security, it doesn't go far enough. The networking and security stack is still office-bound and centralized. If you were designing this from the ground up, you wouldn't start from here. A more radical approach, what we call personal SASE, is to left-shift the networking and security stack all the way to the user edge. Think of it like the transition from the mainframe to the minicomputer to the PC in the early 1980s, a rapid migration of compute power to the end user. Personal SASE involves a similar architectural shift with commensurate productivity gains for the modern hybrid workforce, who expect but rarely get the same level of network performance and seamless security that they currently experience when they step into the office.



Quote for the day:

"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose

Daily Tech Digest - August 04, 2024

Are we prepared for ‘Act 2’ of gen AI?

It’s both logical and tempting to design your AI usage around one large model. You might think you can simply take a giant large language model (LLM) from your Act 1 initiatives and just get moving. However, the better approach is to assemble and integrate a mixture of several models. Just as a human’s frontal cortex handles logic and reasoning while the limbic system deals with fast, spontaneous responses, a good AI system brings together multiple models in a heterogeneous architecture. No two LLMs are alike — and no single model can “do it all.” What’s more, there are cost considerations. The most accurate model might be more expensive and slower. For instance, a faster model might produce a concise answer in one second — something ideal for a chatbot. ... Even in its early days, gen AI quickly presented scenarios and demonstrations that underscore the critical importance of standards and practices that emphasize ethics and responsible use. Gen AI should take a people-centric approach that prioritizes education and integrity by detecting and preventing harmful or inappropriate content — in both user input and model output. For example, invisible watermarks can help reduce the spread of disinformation.


Supercharge AIOps Efficiency With LLMs

One of the superpowers LLMs bring to the table is ultra-efficient summarization. Given a dense information block, generative AI models can extract the main points and actionable insights. As with our earlier trials in algorithmic root cause analysis, we gathered all the data we could surrounding an observed issue, converted it into text-based prompts, and fed it to an LLM along with guidance on how it should summarize and prioritize the data. Then, the LLM was able to leverage its broad training and newfound context to summarize the issues and hypothesize about root causes. By constraining the scope of the prompt to the information and context the LLM needs — and nothing more — we were able to prevent hallucinations and extract valuable insights from the model. ... Another potential application of LLMs is automatically generating post-mortem reports after incidents. Documenting issues and resolutions is not only a best practice but also sometimes a compliance requirement. Rather than scheduling multiple meetings with different SREs, developers, and DevOps engineers to collect information, could LLMs extract the necessary information from the Senser platform and generate reports automatically?
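
A rough sketch of the prompt-construction step described here: assemble only the telemetry relevant to one incident and ask the model for a prioritized summary. The data fields and the call_llm stub are assumptions, not Senser's actual interface.

```python
def build_incident_prompt(alerts: list[str], recent_logs: list[str], topology_note: str) -> str:
    """Constrain the prompt to the data surrounding one incident -- and nothing more."""
    sections = [
        "You are assisting an SRE. Using ONLY the context below, summarize the incident,",
        "list the three most likely root causes in priority order, and flag any missing data.",
        "",
        f"Topology: {topology_note}",
        "Alerts:",
        *alerts,
        "Recent logs:",
        *recent_logs[-50:],   # cap the context that goes into the model
    ]
    return "\n".join(sections)

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; kept abstract to avoid tying the sketch to one vendor.
    return "[model summary]"

def summarize_incident(alerts, recent_logs, topology_note):
    return call_llm(build_incident_prompt(alerts, recent_logs, topology_note))
```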


“AI toothbrushes” are coming for your teeth—and your data

So-called "AI toothbrushes" have become more common since debuting in 2017. Numerous brands now market AI capabilities for toothbrushes with three-figure price tags. But there's limited scientific evidence that AI algorithms help oral health, and companies are becoming more interested in using tech-laden toothbrushes to source user data. ... Tech-enabled toothbrushes bring privacy concerns to a product that has historically had zero privacy implications. But with AI toothbrushes, users are suddenly subject to a company's privacy policy around data and are also potentially contributing to a corporation's marketing, R&D, and/or sales tactics. Privacy policies from toothbrush brands Colgate-Palmolive, Oral-B, Oclean, and Philips all say the companies' apps may gather personal data, which may be used for advertising and could be shared with third parties, including ad tech companies and others that may also use the data for advertising. These companies' policies say users can opt out of sharing data with third parties or targeted advertising, but it's likely that many users overlook the importance of reading privacy policies for a toothbrush.


4 Strategies for Banks and Their Tech Partners that Save Money and Angst

When it comes to technological alignment between banks and tech partners, it’s about more than ensuring tech stacks are compatible. Cultural alignment on work styles, development cycles and more go into making things work. Both partners should be up front about their expectations. For example, banking institutions have more regulatory and administrative hurdles to jump through than technology companies. While veteran fintech companies will be aware and prepared to move in a more conservative way, early-stage technology companies may be quicker to move and work in more unconventional ways. Prioritization of projects on both ends should always be noted in order to set realistic expectations. For example, tech firms typically have a large pipeline of onboarding ahead. And the financial institution typically has limited tech resources to allocate towards project management. ... Finally, when tech firms and financial institutions work together, a strong dose of reality helps. View upfront costs as a foundation for future returns. Community banking and credit union leaders should focus on the potential benefits and value generation expected three to five years after the project begins.


US Army moves closer to fielding next-gen biometrics collection

Specifically designed to be the Army’s forward biometrics collection and matching system, NXGBCC supports access control, identifies persons of interest, and provides biometric identities to detainee and intelligence systems. NXGBCC collects, matches, and stores biometric identities and comprises three components: a mobile collection kit, a static collection kit, and a local trusted source. ... The Army said “NXGBCC will add to the number of biometric modalities collected, provide matches to the warfighter in less than three minutes, increase the data sharing capability, and reduce weight, power, and cost.” NXGBCC will use a Local Trusted Source that is composed of a distributed database that’s capable of being used worldwide, data management software, forward biometric matching software, and an analysis portal. Also, NXGBCC collection kit(s) will be composed of one or more collection devices, a credential/badge device, and a document scanning device. The NXGBCC system employs an integrated set of commercial-off-the-shelf hardware and software that is intended to ensure the end-to-end data flow that’s required to support different technical landscapes during multiple types of operational missions.


The Future of AI: Edge Computing and Qualcomm’s Vision

AI models are becoming increasingly powerful while also getting smaller and more efficient. This advancement enables them to run on edge devices without compromising performance. For instance, Qualcomm’s latest chips are designed to handle large language models and other AI tasks efficiently. These chips are not only powerful but also energy-efficient, making them ideal for mobile devices. One notable example is the Galaxy S24 Ultra, which is equipped with Qualcomm’s Snapdragon 8 Gen 3 chip. This device can perform various AI tasks locally, from live translation of phone calls to AI-assisted photography. Features like live translation and chat assistance, which include tone adjustment, spell check, and translation, run directly on the device, showcasing the potential of edge computing. ... The AI community is also contributing to this trend by developing open-source models that are smaller yet powerful. Innovations like the Mixture of Agents, which allows multiple small AI agents to collaborate on tasks, and Route LLM, which orchestrates which model should handle specific tasks, are making AI more efficient and accessible. 


Software Supply Chain Security: Are You Importing Problems?

In a sense, Software Supply Chain as a strategy, just like Zero Trust, cannot be bought off-the-shelf. It requires a combination of careful planning, changing the business processes, improving communications with your suppliers and customers and, of course, a substantial change in regulations. We are already seeing the first laws introducing stronger punishment for organizations involved in critical infrastructure, with their management facing jail time for heavy violations. Well, perhaps the very definition of “critical” must be revised to include operating systems, public cloud infrastructures, and cybersecurity platforms, considering the potential global impact of these tools on our society.  ... To his practical advice I can only add another bit of philosophical musing: security is impossible without trust, but too much trust is even more dangerous than too little security. Start utilizing the Zero Trust approach for every relationship with a supplier. This can be understood in various ways: from not taking any marketing claim at its face value and always seeking a neutral 3rd party opinion to very strict and formal measures like requiring a high Evaluation Assurance Level of the Common Criteria (ISO 15408) for each IT service or product you deploy.


A CISO’s Observations on Today’s Rapidly Evolving Cybersecurity Landscape

Simply being aware of risks isn’t sufficient. But, role-relevant security simulations will empower the entire workforce to know what to do and how to act when they encounter malicious activity. ... Security should be a smooth process, but it is often complicated. Recall the surge in phishing attacks: employees know not to click dubious links from unknown senders, but do they know how to verify if a link is safe or unsafe beyond their gut instinct? Is the employee aware that there is an official email verification tool? Do they even know how to use it? ... It is not uncommon for business leaders to rush technology adoption, delaying security until later as an added feature bolted on afterward. When companies prioritize speed and scalability at the expense of security, data becomes more mobile and susceptible to attack, making it more difficult for security teams to ascertain the natural limitation of a blast radius. Businesses may also end up in security debt. ... Technology continues to evolve at breakneck speed, and organizations must adapt their security strategy appropriately. As such, businesses should adopt a multifaceted, agile, and ever-evolving cybersecurity approach to managing risks.


Future AI Progress Might Not Be Linear. Policymakers Should Be Prepared.

Policymakers and their advisors can act today to address that risk. Firstly, though it might be politically tempting, they should be mindful of overstating the likely progress and impact of current AI paradigms and systems. Linear extrapolations and quickfire predictions make for effective short-term political communication, but they carry substantial risk: If the next generation of language models is, in fact, not all that useful for bioterrorism; if they are not readily adopted to make discriminatory institutional decisions; or if LLM agents do not arrive in a few years, but we reach slowing progress or a momentary plateau instead, policymakers and the public will take note – and be skeptical of warnings in the future. If nonlinear progress is a realistic option, then policy advocacy on AI should proactively consider it: hedge on future predictions, conscientiously name the possibility of plateaus, and adjust policy proposals accordingly. Secondly, the prospect of plateaus makes reactive and narrow policy-making much more difficult. Their risk is instead best addressed by focusing on building up capacity: equip regulators and enforcement with the expertise, access and tools they need to monitor the state of the field.


Building the data center of the future: Five considerations for IT leaders

Disparate centers of data are, in turn, attracting more data, leading to Data Gravity. Localization needs and a Hybrid IT infrastructure are creating problems related to data interconnection. Complex systems require an abstraction layer to move data around to fulfill fast-changing computing needs. IT needs interconnection between workflow participants, applications, multiple clouds, and ecosystems, all from a single interface, without getting bogged down by the complexity wall. ... Increasing global decarbonization requirements mean data centers must address energy consumption caused by high-density computing. ... Global variations in data handling and privacy legislation require that data remain restricted to specific geographical regions. Such laws aren't the only drivers for data localization. The increasing use of AI at the edge, the source of the data, is driving demand for low-latency operations, which in turn requires localized data storage and processing. Concerns about proprietary algorithms being stored in the public cloud are also leading companies to move to a Hybrid IT infrastructure that can harness the best of all worlds.



Quote for the day:

"Perseverance is failing 19 times and succeding the 20th." -- Julie Andrews

Daily Tech Digest - Aug 03, 2024

Solving the tech debt problem while staying competitive and secure

Technical debt often stems from the costs of running and maintaining legacy technology services, especially older applications. It typically arises when organizations make short-term sacrifices or use quick fixes to address immediate needs without ever returning to resolve those temporary solutions. For CIOs, balancing technical debt with other strategic priorities is a constant challenge. They must decide whether to invest resources in high-profile areas like AI and security or to prioritize reducing technical debt. ... CIOs should invest in robust cybersecurity measures, including advanced threat detection, response capabilities, and employee training. Maintaining software updates and implementing multifactor authentication (MFA) and encryption will further strengthen an organization’s defenses. However, technical debt can significantly undermine these cybersecurity efforts. Legacy systems and outdated software can have vulnerabilities waiting to be exploited. Additionally, technical debt is often represented by multiple, disparate tools acquired over time, which can hinder the implementation of a cohesive security strategy and increase cybersecurity risk.


How to Create a Data-Driven Culture for Your Business

With businesses collecting more data than ever, for data analysts it can be more like scrounging through the bins than panning for gold. “Hiring data scientists is outside the reach of most organizations but that doesn't mean you can’t use the expertise of an AI agent,” Callens says. Once a business has a handle on which metrics really matter, the rest falls into place, organizations can define objectives and then optimize data sources. As the quality of the data improves the decisions are better informed and the outcomes can be monitored more effectively. Rather than each decision acting in isolation it becomes a positive feedback loop where data and decisions are inextricably linked: At that point the organization is truly data driven. Subramanian explains that changing the culture to become more data-driven requires top-down focus. When making decisions stakeholders should be asked to provide data justification for their choices and managers should be asked to track and report on data metrics in their organizations. “Have you established tracking of historical data metrics and some trend analysis?” she says. “Prioritizing data in decision making will help drive a more data-driven culture.”


How Prompt Engineering Can Support Successful AI Projects

Central to the technology is the concept of foundation models, which are rapidly broadening the functionality of AI. While earlier AI platforms were trained on specific data sets to produce a focused but limited output, the new approach throws the doors wide open. In simple — and somewhat unsettling — terms, a foundation model can learn new tricks from unrelated data. “What makes these new systems foundation models is that they, as the name suggests, can be the foundation for many applications of the AI model,” says IBM. “Using self-supervised learning and transfer learning, the model can apply information it’s learnt about one situation to another.” Given the massive amounts of data fed into AI models, it isn’t surprising that they need guidance to produce usable output. ... AI models benefit from clear parameters. One of the most basic is length. OpenAI offers some advice: “The targeted output length can be specified in terms of the count of words, sentences, paragraphs, bullet points, etc. Note however that instructing the model to generate a specific number of words does not work with high precision. The model can more reliably generate outputs with a specific number of paragraphs or bullet points.”
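
As a small illustration of the guidance quoted above, the helper below phrases length constraints in units the model follows more reliably (bullet points or paragraphs) rather than exact word counts; the template wording and function name are invented for this sketch.

```python
def length_constrained_prompt(task: str, n_bullets: int | None = None,
                              n_paragraphs: int | None = None) -> str:
    """Prefer bullet or paragraph counts over word counts, which models follow less precisely."""
    parts = [task.strip()]
    if n_bullets is not None:
        parts.append(f"Answer in exactly {n_bullets} bullet points.")
    elif n_paragraphs is not None:
        parts.append(f"Answer in exactly {n_paragraphs} short paragraphs.")
    return " ".join(parts)

# Example output: "Summarize the attached incident report. Answer in exactly 3 bullet points."
print(length_constrained_prompt("Summarize the attached incident report.", n_bullets=3))
```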


Effective Strategies To Strengthen Your API Security

To secure your organisation, you have to figure out where your APIs are, who’s using them and how they are being accessed. This information is important as API deployment increases your organisation’s attack surface making it more vulnerable to threats. The more exposed they are, the greater the chance a sneaky attacker might find a vulnerable spot in your system. Once you’ve pinpointed your APIs and have full visibility of potential points of access, you can start to include them in your vulnerability management processes. By proactively identifying vulnerabilities, you can take immediate action against potential threats. Skipping this step is like leaving the front door wide open. APIs give businesses the power to automate the process and boost operational efficiency. But here’s the thing: with great convenience comes potential vulnerabilities that malicious actors could exploit. If your APIs are internet-facing, then it’s important to put in place rate-limiting to control requests and enforce authentication for every API interaction. This helps take the guesswork out of who gets access to what data through your APIs. Another key measure is using the cryptographic signing of requests.
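
To make "cryptographic signing of requests" concrete, here is a minimal HMAC example using only Python's standard library. The header scheme, shared-secret handling, and the 300-second replay window are illustrative choices, not a specific gateway's specification.

```python
import hashlib
import hmac
import time

SECRET_KEY = b"rotate-me-and-store-in-a-vault"   # shared secret (hypothetical)

def sign_request(method: str, path: str, body: bytes, timestamp: int) -> str:
    """Sign method + path + body + timestamp so tampered or replayed requests fail verification."""
    message = f"{method}\n{path}\n{timestamp}\n".encode() + body
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, body: bytes, timestamp: int, signature: str,
                   max_skew_seconds: int = 300) -> bool:
    if abs(time.time() - timestamp) > max_skew_seconds:   # reject stale or replayed requests
        return False
    expected = sign_request(method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)       # constant-time comparison

# Client side: attach the timestamp and signature as headers (e.g. X-Timestamp, X-Signature).
ts = int(time.time())
sig = sign_request("POST", "/v1/payments", b'{"amount": 100}', ts)
assert verify_request("POST", "/v1/payments", b'{"amount": 100}', ts, sig)
```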


The Time is Now for Network-as-a-Service (NaaS)

As the world’s networking infrastructure has evolved, there is now far more private backbone bandwidth available. Like all cloud solutions, NaaS also benefits from significant ongoing price/performance improvements in commercial hardware. Combined with the growing number of carrier-neutral colocation facilities, NaaS providers simply have many more building blocks to assemble reliable, affordable, any-to-any connectivity for practically any location. The biggest changes derive from the advanced networking and security approaches that today’s NaaS solutions employ. Modern NaaS solutions fully disaggregate control and data planes, hosting control functions in the cloud. As a result, they benefit from practically unlimited (and inexpensive) cloud computing capacity to keep costs low, even as they maintain privacy and guaranteed performance. Even more importantly, the most sophisticated NaaS providers use novel metadata-based routing techniques and maintain end-to-end encryption. These providers have no visibility into enterprise traffic; all encryption/decryption happens only under the business’ direct control.


Criticality in Data Stream Processing and a Few Effective Approaches

With the advancement of stream processing engines like Apache Flink, Spark, etc., we can aggregate and process data streams in real time, as they handle low-latency data ingestion while supporting fault tolerance and data processing at scale. Finally, we can ingest the processed data into streaming databases like Apache Druid, RisingWave, and Apache Pinot for querying and analysis. Additionally, we can integrate visualization tools like Grafana, Superset, etc., for dashboards, graphs, and more. This is the overall high-level data stream processing life cycle to derive business value and enhance decision-making capabilities from streams of data. Even with its strength and speed, stream processing has drawbacks of its own. From a bird's eye view, a few of them are ensuring data consistency, achieving scalability, maintaining fault tolerance, and managing event ordering. Even though we have event/data stream ingestion frameworks like Kafka, processing engines like Spark, Flink, etc., and streaming databases like Druid, RisingWave, etc., we encounter a few other challenges if we drill down more.
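
To ground the event-ordering challenge mentioned above, here is a small pure-Python sketch of tumbling-window counting with a watermark that tolerates late, out-of-order events. The window size and allowed lateness are arbitrary example values; a production pipeline would leave this to an engine like Flink or Spark.

```python
from collections import defaultdict

WINDOW_SECONDS = 60          # tumbling window size (example value)
ALLOWED_LATENESS = 30        # how far behind the watermark an event may still arrive (example value)

counts: dict[int, int] = defaultdict(int)   # window start -> event count
watermark = 0                                # highest event time seen so far

def process_event(event_time: int) -> None:
    """Assign each event to a tumbling window unless it is older than the watermark allows."""
    global watermark
    watermark = max(watermark, event_time)
    if event_time < watermark - ALLOWED_LATENESS:
        return                               # too late: drop (or divert to a side output)
    window_start = (event_time // WINDOW_SECONDS) * WINDOW_SECONDS
    counts[window_start] += 1

# Out-of-order arrival: the event at t=110 still lands in the 60-120 window,
# while the very late event at t=10 is dropped once the watermark has advanced.
for t in [100, 130, 110, 170, 10]:
    process_event(t)
print(dict(counts))   # {60: 2, 120: 2}
```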


Understanding the Impact of AI on Cloud Spending and How to Harness AI for Enhanced Cloud Efficiency

The real magic happens when AI unlocks advanced capabilities in cloud services. By crunching real-time data, AI transforms how businesses operate, making them more agile and strategic in their approaches. Businesses can gain better scalability, run operations more efficiently, and make smarter, data-driven decisions – all thanks to AI. One of the biggest advantages of AI in the cloud is how it helps companies scale up smoothly. By using AI-driven solutions, businesses can predict future demands and optimise resource allocation accordingly. This means they can handle increased workloads without massive infrastructure overhauls, which is crucial for staying nimble and competitive. Scaling AI in cloud computing isn’t without its challenges, though. It requires strategic approaches like getting leadership buy-in, establishing clear ROI metrics, and using responsible AI algorithms. These steps ensure that AI integration not only scales operations but also does so efficiently and with minimal disruption. AI algorithms can also continuously monitor workload patterns and recommend adjustments to resource allocations.
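
As a toy illustration of that predict-and-optimise loop, the sketch below smooths recent utilisation, projects the next interval, and turns the projection into a scaling recommendation. The smoothing factor and thresholds are invented for the example and would need tuning against real workload data.

```python
# Toy sketch of forecast-driven capacity recommendations: smooth recent
# utilisation, project the next interval, and suggest a scaling action.
# The smoothing factor and thresholds are illustrative assumptions.
from typing import List


def forecast_next(utilisation: List[float], alpha: float = 0.5) -> float:
    """Single exponential smoothing over recent CPU utilisation samples."""
    estimate = utilisation[0]
    for sample in utilisation[1:]:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate


def recommend(utilisation: List[float],
              scale_up_at: float = 0.75,
              scale_down_at: float = 0.30) -> str:
    projected = forecast_next(utilisation)
    if projected >= scale_up_at:
        return f"scale up (projected {projected:.0%} utilisation)"
    if projected <= scale_down_at:
        return f"scale down (projected {projected:.0%} utilisation)"
    return f"hold (projected {projected:.0%} utilisation)"


print(recommend([0.42, 0.55, 0.63, 0.71, 0.78]))  # e.g. "scale up (...)"
```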


Blockchain Technology and Modern Banking Systems

“Zumo's innovative approach to integrating digital assets into traditional banking systems leverages APIs to simplify the process.” As Nick Jones explains, its Crypto Invest solution offers a digital asset custody and exchange service that can be seamlessly incorporated into a bank's existing IT infrastructure. “This provides consumer-facing retail banks with a compliance-focused route to offer their customers the option to invest in digital assets,” says Nick. By doing so, banks can generate new revenue streams, enabling customers to buy, hold and sell crypto within the familiar confines of their own banking platform. Recognising the regulatory and operational challenges faced by banks, Nick Jones believes in developing a sustainable and long-term approach, with a focus on delivering the necessary infrastructure. For banks to confidently integrate digital asset propositions into their business models, they must address the financial, operational and environmental sustainability of the project. Similarly, Kurt Wuckert highlights the feasibility of a hybrid approach for banks, where blockchain solutions are introduced gradually alongside existing systems. 


The transformation fallacy

Describing the migration process so far, Jordaan says that they started with some of the very critical systems. “One of which was the e-commerce system that runs 50 percent of our revenue,” he says. “That was significant, and provided scalability, because we could add more countries into it, and there are events such as airlines that cancel flights and so our customers would suddenly be looking for bookings.” After that, it was a long-running program of lifting and shifting workloads depending on their priority. The remaining data centers are either “just really complicated” to decommission, or are in the process of being shut down. By the end of next year, Jordaan expects TUI to have just one or two data centers. One of the more unique areas of TUI’s business from an IT perspective is that of the cruise ships. “Cruise ships actually have a whole data center on board,” Jordaan says. “It has completely separate networks for the onboard systems, navigation systems, and everything else, because you're in the middle of the sea. You need all the compute, storage, and networks to run from a data center.” These systems are being transformed, too. Ships are deploying satellite connectivity to bring greater Internet connectivity on board. 


AI and Design Thinking: The Dynamic Duo of Product Development

When designing products that incorporate generative AI, it may feel that you are tipping in the direction of being too technology-focused. You might be tempted to forego human intuition in order to develop products that embrace AI’s innovation. Or, you may have a more difficult time discerning what is meant to be human and what is meant to be purely technical, because AI is such a new and dynamic field that changes almost weekly. The human/machine duality is precisely why combining human-centric Design Thinking with the power of Generative AI is so effective for product development. Design Thinking isn’t merely a method; it’s a mindset focusing on user needs, iterative learning, and cross-functional teamwork—all of which are essential for pioneering AI-driven products. ... One might say that focusing on a solution to a problem, instead of the problem itself, is quite an empathetic way to approach a problem. Empathy, a cornerstone of Design Thinking, allows developers to understand their users deeply. ... While AI is a powerful tool, it’s crucial to maintain ethical standards and monitor for biases. Generative AI should not be considered a replacement for human ethics and critical thinking. Instead, use it as a collaborative component for enhancing creativity and efficiency.



Quote for the day:

"The litmus test for our success as Leaders is not how many people we are leading, but how many we are transforming into leaders" -- Kayode Fayemi

Daily Tech Digest - August 02, 2024

Small language models and open source are transforming AI

From an enterprise perspective, the advantages of embracing SLMs are multifaceted. These models allow businesses to scale their AI deployments cost-effectively, an essential consideration for startups and midsize enterprises that need to maximize their technology investments. Enhanced agility becomes a tangible benefit as shorter deployment times and easier customization align AI capabilities more closely with evolving business needs. Data privacy and sovereignty (perennial concerns in the enterprise world) are better addressed with SLMs hosted on-premises or within private clouds. This approach satisfies regulatory and compliance requirements while maintaining robust security. Additionally, the reduced energy consumption of SLMs supports corporate sustainability initiatives. That’s still important, right? The pivot to smaller language models, bolstered by open source innovation, reshapes how enterprises approach AI. By mitigating the cost and complexity of large generative AI systems, SLMs offer a viable, efficient, and customizable path forward. This shift enhances the business value of AI investments and supports sustainable and scalable growth. 
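
As one hedged illustration of the on-premises point, a small open-weight model can be served entirely inside your own environment with the Hugging Face transformers library, so prompts and outputs never leave infrastructure you control. The model id and generation settings below are examples, not a recommendation.

```python
# Minimal sketch: run a small open-weight language model on infrastructure
# you control, so prompts and outputs never leave it.
# The model id and generation settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # example of a small open model
    device_map="auto",                          # use local GPU(s) if present
)

prompt = "Summarise our data-retention policy for a new hire:"
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```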


The Impact and Future of AI in Financial Services

Winston noted that AI systems require vast amounts of data, which raises concerns about data privacy and security. “Financial institutions must ensure compliance with regulations such as GDPR [General Data Protection Regulation] and CCPA [California Consumer Privacy Act] while safeguarding sensitive customer information,” he explained. Simply using general GenAI tools as a quick fix isn’t enough. “Financial services will need a solution built specifically for the industry and leverages deep data related to how the entire industry works,” said Kevin Green, COO of Hapax, a banking AI platform. “It’s easy for general GenAI tools to identify what changes are made to regulations, but if it does not understand how those changes impact an institution, it’s simply just an alert.” According to Green, the next wave of GenAI technologies should go beyond mere alerts; they must explain how regulatory changes affect institutions and outline actionable steps. As AI technology evolves, several emerging technologies could significantly transform the financial services industry. Ludwig pointed out that quantum computers, which can solve complex problems much faster than traditional computers, might revolutionize risk management, portfolio optimization, and fraud detection. 


Is Your Data AI-Ready?

Without proper data contextualization, AI systems may make incorrect assumptions or draw erroneous conclusions, undermining the reliability and value of the insights they generate. To avoid such pitfalls, focus on categorizing and classifying your data with the necessary metadata, such as timestamps, location information, document classification, and other relevant contextual details. This will enable your AI to properly understand the context of the data and generate meaningful, actionable insights. Additionally, integrating complementary data can significantly enhance the information’s value, depth, and usefulness for your AI systems to analyze. ... Although older data may be necessary for compliance or historical purposes, it may not be relevant or useful for your AI initiatives. Outdated information can burden your storage systems and compromise the validity of the AI-generated insights. Imagine an AI system analyzing a decade-old market report to inform critical business decisions—the insights would likely be outdated and misleading. That’s why establishing and implementing robust retention and archiving policies as part of your information life cycle management is critical. 
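
A small sketch of that contextualisation step: wrap each record with the metadata the article lists (timestamp, location, classification) and flag anything past a retention cutoff so it can be archived rather than fed to the AI. The field names and the seven-year cutoff are illustrative assumptions.

```python
# Small sketch: attach contextual metadata to raw records and flag items
# older than a retention cutoff so they can be archived instead of analysed.
# Field names and the seven-year cutoff are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import List


@dataclass
class ContextualisedRecord:
    content: str
    timestamp: datetime
    location: str
    classification: str          # e.g. "public", "internal", "confidential"
    tags: List[str] = field(default_factory=list)

    def is_stale(self, max_age: timedelta = timedelta(days=7 * 365)) -> bool:
        """True if the record is older than the retention cutoff."""
        return datetime.now(timezone.utc) - self.timestamp > max_age


record = ContextualisedRecord(
    content="Q2 market summary ...",
    timestamp=datetime(2014, 6, 30, tzinfo=timezone.utc),
    location="EMEA",
    classification="internal",
    tags=["market-report"],
)

if record.is_stale():
    print("archive: too old to inform current AI-driven decisions")
```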


Generative AI: Good Or Bad News For Software

There are plenty of examples of breaches that started with someone copying over code and not checking it thoroughly. Think back to the Heartbleed exploit, a security bug in a popular library that led to the exposure of hundreds of thousands of websites, servers and other devices that used the code. Because the library was so widely used, the assumption was that, of course, someone had checked it for vulnerabilities. Instead, the vulnerability persisted for years, quietly used by attackers to exploit vulnerable systems. This is the darker side of ChatGPT: attackers also have access to the tool. While OpenAI has built in some safeguards to prevent it from answering questions about problematic subjects like code injection, the CyberArk Labs team has already uncovered ways in which the tool could be used for malicious ends, such as creating polymorphic malware or producing malicious code more rapidly. Even with safeguards, developers must exercise caution: ChatGPT generates the code, but developers are accountable for it.


FinOps Can Turn IT Cost Centers Into a Value Driver

Once FinOps has been successfully implemented within an organization, teams can begin to automate the practice while building a culture of continuous improvement. Leaders can now better forecast and plan, leading to more precise budgeting. Additionally, GenAI can provide unique insights into seasonality. For example, if resource demand spikes every three days, or at otherwise unpredictable frequencies, AI can help you detect these patterns so you can optimize by scaling up when required and back down to save costs during lulls in demand. This kind of pattern detection is difficult without AI. It all goes back to the concept of understanding value and total cost. With FinOps, IT leaders can demonstrate exactly what they spend on and why. They can point out how the budget for software licenses and labor is directly tied to IT operations outcomes, translating into greater resiliency and higher customer satisfaction. They can prove that they’ve spent money responsibly and that they should retain that level of funding because it makes the business run better. FinOps and AI advancements allow businesses to do more and go further than they ever could. Almost 65% of CFOs are integrating AI into their strategy.
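
As a toy sketch of that kind of seasonality detection, autocorrelation over a daily usage series can surface the lag at which demand repeats, for instance a spike every third day. The synthetic data, lag range, and wording below are invented purely to illustrate the idea.

```python
# Toy sketch of seasonality detection: use autocorrelation to find the lag at
# which daily resource demand repeats (e.g. a spike every three days).
# The synthetic series and parameters are illustrative assumptions.
import numpy as np

# Synthetic daily usage with a spike every third day plus noise.
rng = np.random.default_rng(0)
days = 60
usage = 100 + 10 * rng.standard_normal(days)
usage[::3] += 80

centred = usage - usage.mean()
acf = np.correlate(centred, centred, mode="full")[days - 1:]
acf /= acf[0]                       # normalise so lag 0 == 1.0

# Strongest repeating pattern, ignoring lag 0.
period = int(np.argmax(acf[1: days // 2])) + 1
print(f"Demand repeats roughly every {period} days; "
      f"scale up on that cadence and scale down in between.")
```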


The convergence of human and machine in transforming business

To achieve a true collaboration between humans and machines, it is crucial to establish a clear understanding and definition of their respective roles. By emphasizing the unique strengths of AI while strategically addressing its limitations, organizations can create a synergy that maximizes the potential of both human expertise and machine capabilities. AI excels in data structuring, capable of transforming complex, unstructured information into easily searchable and accessible content. This makes it an invaluable tool for sorting through vast online sources, including datasets, news articles, academic reports and other forms of digital content, and extracting meaningful insights. Moreover, AI systems operate tirelessly, functioning 24/7 without the need for breaks or downtime. This "always on" nature ensures a constant state of productivity and responsiveness, enabling organizations to keep pace with the rapidly changing market. Another key strength of AI lies in its scalability. As data volumes continue to grow and the complexity of tasks increases, AI can be integrated into existing workflows and systems, allowing businesses to process and analyze vast amounts of information efficiently.


The Crucial Role of Real-time Analytics in Modern SOCs

Security analysts often spend considerable time manually correlating diverse data sources to understand the context of specific alerts. This process leads to inefficiency, as they must scan various sources, determine if an alert is genuine or a false positive, assess its priority, and evaluate its potential impact on the organization. This tedious and lengthy process can lead to analyst burnout, negatively impacting SOC performance. ... Traditional Security Information and Event Management (SIEM) systems in SOCs struggle to effectively track and analyze sophisticated cybersecurity threats. These legacy systems often burden SOC teams with false positives and negatives. Their generalized approach to analytics can create vulnerabilities and strain SOC resources, requiring additional staff to address even a single false positive. In contrast, real-time analytics or analytics-driven SIEMs offer superior context for security alerts, sending only genuine threats to security teams. ... Staying ahead of potential threats is crucial for organizations in today's landscape. Real-time threat intelligence plays a vital role in proactively detecting threats. Through continuous monitoring of various threat vectors, it can identify and stop suspicious activities or anomalies before they cause harm.
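
As a minimal sketch of the correlation an analytics-driven SIEM automates, the snippet below enriches an incoming alert with asset criticality and a threat-intelligence lookup, then scores it so only likely-genuine threats are escalated. The feeds, field names, and weights are illustrative assumptions, not any real SIEM's logic.

```python
# Minimal sketch: enrich an incoming alert with asset criticality and a
# threat-intel lookup, then score it so only likely-genuine threats are
# escalated. Feeds, field names, and weights are illustrative assumptions.
ASSET_CRITICALITY = {"payroll-db": 3, "dev-laptop-17": 1}     # hypothetical CMDB
KNOWN_BAD_IPS = {"203.0.113.54", "198.51.100.23"}             # hypothetical intel feed


def triage(alert: dict, escalate_at: int = 4) -> str:
    score = 0
    score += ASSET_CRITICALITY.get(alert["asset"], 1)         # who is affected?
    if alert["source_ip"] in KNOWN_BAD_IPS:                   # is the source known-bad?
        score += 3
    if alert.get("repeat_count", 0) > 5:                      # is it persistent?
        score += 1
    return "escalate to analyst" if score >= escalate_at else "auto-suppress"


alert = {"asset": "payroll-db", "source_ip": "203.0.113.54", "repeat_count": 2}
print(triage(alert))  # escalate to analyst
```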


Architecting with AI

Every project is different, and understanding the differences between projects is all about context. Do we have documentation of thousands of corporate IT projects that we would need to train an AI to understand context? Some of that documentation probably exists, but it's almost all proprietary. Even that's optimistic; a lot of the documentation we would need was never captured and may never have been expressed. Another issue in software design is breaking larger tasks up into smaller components. That may be the biggest theme of the history of software design. AI is already useful for refactoring source code. But the issues change when we consider AI as a component of a software system. The code used to implement AI is usually surprisingly small — that's not an issue. However, take a step back and ask why we want software to be composed of small, modular components. Small isn't "good" in and of itself. ... Small components reduce risk: it's easier to understand an individual class or microservice than a multi-million line monolith. There's a well-known paper that shows a small box, representing a model. The box is surrounded by many other boxes that represent other software components: data pipelines, storage, user interfaces, you name it. 


Hungry for resources, AI redefines the data center calculus

With data centers near capacity in the US, there’s a critical need for organizations to consider hardware upgrades, he adds. The shortage is exacerbated because AI and machine learning workloads will require modern hardware. “Modern hardware provides enhanced performance, reliability, and security features, crucial for maintaining a competitive edge and ensuring data integrity,” Warman says. “High-performance hardware can support more workloads in less space, addressing the capacity constraints faced by many data centers.” The demands of AI make for a compelling reason to consider hardware upgrades, adds Rob Clark, president and CTO at AI tool provider Seekr. Organizations considering new hardware should pull the trigger based on factors beyond space considerations, such as price and performance, new features, and the age of existing hardware, he says. Older GPUs are a prime target for replacement in the AI era, as memory per card and performance per chip increases, Clark adds. “It is more efficient to have fewer, larger cards processing AI workloads,” he says. While AI is driving the demand for data center expansion and hardware upgrades, it can also be part of the solution, says Timothy Bates, a professor in the University of Michigan College of Innovation and Technology. 


How to Bake Security into Platform Engineering

A key challenge for platform engineers is modernizing legacy applications, which often contain security holes. “Platform engineers and CIOs have a responsibility to modernize by bridging the gap between the old and new and understanding the security implications between the old and new,” he says. When securing the software development lifecycle, organizations should secure both continuous integration and continuous delivery/continuous deployment pipelines as well as the software supply chain, Mercer says. Securing applications entails “integrating security into the CI/CD pipelines in a seamless manner that does not create unnecessary friction for developers,” he says. In addition, organizations must prioritize educating employees on how to secure applications and software supply chains. ... As part of baking security into the software development process, security responsibility shifts from the cybersecurity team to the development organization. That means security becomes as much a part of deliverables as quality or safety, Montenegro says. “We see an increasing number of organizations adopting a security mindset within their engineering teams where the responsibility for product security lies with engineering, not the security team,” he says.



Quote for the day:

“If you really want to do something, you will work hard for it.” -- Edmund Hillary