Daily Tech Digest - July 09, 2024

AI stack attack: Navigating the generative tech maze

Successful integration often depends on having a solid foundation of data and processing capabilities. “Do you have a real-time system? Do you have stream processing? Do you have batch processing capabilities?” asks Intuit’s Srivastava. These underlying systems form the backbone upon which advanced AI capabilities can be built. For many organizations, the challenge lies in connecting AI systems with diverse and often siloed data sources. Illumex has focused on this problem, developing solutions that can work with existing data infrastructures. “We can actually connect to the data where it is. We don’t need them to move that data,” explains Tokarev Sela. This approach allows enterprises to leverage their existing data assets without requiring extensive restructuring. Integration challenges extend beyond just data connectivity. ... Security integration is another crucial consideration. As AI systems often deal with sensitive data and make important decisions, they must be incorporated into existing security frameworks and comply with organizational policies and regulatory requirements.


How to Architect Software for a Greener Future

The first carbon-aware action is a time shift: moving your workload to run at a greener time. You can use burstable or flexible instances to achieve this. It’s essentially a sophisticated scheduling problem, akin to looking at a forecast to determine when the grid will be greenest—or conversely, how to avoid peak dirty periods. There are various methods to facilitate this on the operational side. Naturally, this strategy should apply primarily to non-demanding workloads. ... Another carbon-aware action you can take is location shifting—moving your workload to a greener location. This approach isn’t always feasible but works well when network costs are low, and privacy considerations allow. ... Resiliency is another significant factor. Many green practices, like autoscaling, improve software resilience by adapting to demand variability. Carbon awareness actions also serve to future-proof your software for a post-energy transition world, where considerations like carbon caps and budgets may become commonplace. Establishing mechanisms now prepares your software for future regulatory and environmental challenges.
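To make the time-shifting idea concrete, here is a minimal Python sketch: given an hourly carbon-intensity forecast, it picks the greenest contiguous window for a deferrable batch job. The forecast values and the three-hour duration are invented; in practice the intensity data would come from a grid-data API.

```python
from datetime import datetime, timedelta

# Hypothetical hourly forecast: grams of CO2 per kWh for the next 24 hours.
forecast = [320, 310, 290, 250, 210, 180, 170, 190,
            240, 300, 350, 380, 400, 390, 370, 340,
            330, 320, 310, 300, 280, 260, 240, 230]

def greenest_start(forecast, duration_hours):
    """Return the start hour minimizing average carbon intensity
    over a contiguous window of the given duration."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - duration_hours + 1):
        avg = sum(forecast[start:start + duration_hours]) / duration_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

start, avg = greenest_start(forecast, duration_hours=3)
run_at = datetime.now().replace(minute=0, second=0, microsecond=0) + timedelta(hours=start)
print(f"Schedule the 3-hour batch job at {run_at:%H:%M} (avg {avg:.0f} gCO2/kWh)")
```

The same window search generalizes to avoiding peak dirty periods: invert the comparison to find the worst window and schedule around it.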


Evaluating board maturity: essential steps for advanced governance

Most boards lack a firm grasp of fundamental governance principles. I'd go so far as to say that 8 or 9 out of 10 boards could be described this way. Your average board director is intelligent and respected within their communities. But they often don't receive meaningful governance training. Instead, they follow established board norms without questioning them, which can lead to significant governance failures. Consider Enron, Wells Fargo, Volkswagen AG, Theranos, and, recently, Boeing—all had boards filled with recognized experts. However, inadequate oversight caused or allowed them to make serious and damaging errors. This is most starkly illustrated by Barney Frank, co-author of the Dodd-Frank Act (passed following the 2008 financial crisis) and a board member of Silicon Valley Bank when it collapsed. Having brilliant board members doesn't guarantee effective governance. The point is that, for different reasons, consultants and experts can 'misread' where a board is at. Frankly, this is most often due to just being lazy. But sometimes it is due to just not being clear about what to look for.


Mastering Serverless Debugging

Feature flags allow you to enable or disable parts of your application without deploying new code. This can be invaluable for isolating issues in a live environment. By toggling specific features on or off, you can narrow down the problematic areas and observe the application’s behavior under different configurations. Implementing feature flags involves adding conditional checks in your code that control the execution of specific features based on the flag’s status. Monitoring the application with different flag settings helps identify the source of bugs and allows you to test fixes without affecting the entire user base. ... Logging is one of the most common and essential tools for debugging serverless applications. I wrote and spoke a lot about logging in the past. By logging all relevant data points, including inputs and outputs of your functions, you can trace the flow of execution and identify where things go wrong. However, excessive logging can increase costs, as serverless billing is often based on execution time and resources used. It’s important to strike a balance between sufficient logging and cost efficiency. 
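To illustrate the conditional checks described above, here is a minimal Python sketch of a feature flag guarding a suspect code path. The flag store, flag names, and pricing functions are all hypothetical; production systems typically back flags with a configuration service or a feature-management platform.

```python
# Hypothetical in-memory flag store.
FLAGS = {
    "new_pricing_engine": False,  # suspect code path, toggled off while debugging
    "verbose_tracing": True,      # extra diagnostics during the investigation
}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def legacy_pricing(order: dict) -> float:
    return sum(item["price"] for item in order["items"])

def new_pricing(order: dict) -> float:
    # Hypothetical rewrite being isolated behind the flag.
    return round(legacy_pricing(order) * 0.95, 2)

def calculate_price(order: dict) -> float:
    total = new_pricing(order) if is_enabled("new_pricing_engine") else legacy_pricing(order)
    if is_enabled("verbose_tracing"):
        print(f"order={order!r} -> total={total}")
    return total

print(calculate_price({"items": [{"price": 10.0}, {"price": 5.5}]}))  # 15.5 via the legacy path
```

Toggling new_pricing_engine off reverts traffic to the known-good path without a redeploy, which is exactly what makes flags useful for isolating live issues.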


Implementing Data Fabric: 7 Key Steps

As businesses generate and collect vast amounts of data from diverse sources, including cloud services, mobile applications, and IoT devices, the challenge of managing, processing, and leveraging this data efficiently becomes increasingly critical. Data fabric emerges as a holistic approach to address these challenges by providing a unified architecture that integrates different data management processes across various environments. This innovative framework enables seamless data access, sharing, and analysis across the organization irrespective of where the data resides – be it on-premises or in multi-cloud environments. The significance of data fabric lies in its ability to break down silos and foster a collaborative environment where information is easily accessible and actionable insights can be derived. By implementing a robust data fabric strategy, businesses can enhance their operational efficiency, drive innovation, and create personalized customer experiences. Implementing a data fabric strategy involves a comprehensive approach that integrates various Data Management and processing disciplines across an organization.


Empowering Self-Service Users in the Digital Age

Ultimately, portals must strike the balance between freedom and control, which can be achieved by ensuring flexibility with role-based access control. Granting end users the freedom to deploy within a secure framework of predefined permissions creates a space ripe for innovation within a robustly protected environment. This means users can explore, experiment and innovate without concerns about security boundaries or unnecessary hurdles. But of course, as with any project, organizations can’t afford to build something and consider that job done. Measuring success is ongoing. Metrics such as how often the portal is accessed, who uses what, and which service catalogs are used should be tracked, along with other relevant data, to help point to any areas that need improvement. It is also important to remember that it is collaborative work between the platform team and end users. And in technology, there is always room for improvement. For instance, recent advances in AI/ML could soon be leveraged to analyze previously inaccessible datasets and generate smarter and faster decision-making.
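A minimal sketch of the role-based access control idea, with invented role names and actions: users act freely within whatever their role permits, and anything outside it is refused.

```python
# Roles map to the actions they are allowed to perform (illustrative only).
ROLE_PERMISSIONS = {
    "viewer":    {"view"},
    "developer": {"view", "deploy_dev"},
    "platform":  {"view", "deploy_dev", "deploy_prod", "edit_catalog"},
}

def can(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

def deploy(user_role: str, environment: str) -> None:
    action = f"deploy_{environment}"
    if not can(user_role, action):
        raise PermissionError(f"role '{user_role}' may not {action}")
    print(f"deploying to {environment}...")

deploy("developer", "dev")    # allowed
# deploy("developer", "prod") # would raise PermissionError
```

The portal can then log each permitted and refused action, which doubles as the usage data needed for the success metrics above.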


Desperate for power, AI hosts turn to nuclear industry

As opposed to adding new green energy to meet AI’s power demands, tech companies are seeking power from existing electricity resources. That could raise prices for other customers and hold back emission-cutting goals, according to The Wall Street Journal and other sources. According to sources cited by the WSJ, the owners of about one-third of US nuclear power plants are in talks with tech companies to provide electricity to new data centers needed to meet the demands of an artificial-intelligence boom. ... “The power companies are having a real problem meeting the demands now,” Gold said. “To build new plants, you’ve got to go through all kinds of hoops. That’s why there’s a power plant shortage now in the country. When we get a really hot day in this country, you see brownouts.” The available energy could go to the highest bidder. Ironically, though, the bill for that power will be borne by AI users, not its creators and providers. “Yeah, [AWS] is paying a billion dollars a year in electrical bills, but their customers are paying them $2 billion a year. That’s how commerce works,” Gold said.


Fake network traffic is on the rise — here’s how to counter it

“Attempting to homogenize the bot world and the potential threat it poses is a dangerous prospect. The fact is, it is not that simple, and cyber professionals must understand the issue in the context of their own goals...” ... “Cyber professionals need to understand the bot ecosystem and the resulting threats in order to protect their organizations from direct network exploitation, indirect threat to the product through algorithm manipulation, and a poor user experience, and the threat of users being targeted on their platform,” Cooke says. “As well as [understanding] direct security threats from malicious actors, cyber professionals need to understand the impact on day-to-day issues like advertising and network management from bot profiles as a whole,” she adds. “So cyber professionals must ensure that the problem is tackled holistically, protecting their networks, data and their users from this increasingly sophisticated threat. Measures to detect and prevent malicious bot activity must be built into new releases, and cyber professionals should act as educational evangelists for users to help them help themselves with a strong awareness of the trademarks of fake traffic and malicious profiles.” 


Researchers reveal flaws in AI agent benchmarking

Since calling the models underlying most AI agents repeatedly can increase accuracy, researchers can be tempted to build extremely expensive agents so they can claim top spot in accuracy. But the paper described three simple baseline agents developed by the authors that outperform many of the complex architectures at much lower cost. ... Two factors determine the total cost of running an agent: the one-time costs involved in optimizing the agent for a task, and the variable costs incurred each time it is run. ... Researchers and those who develop models have different benchmarking needs from downstream developers who are choosing an AI to use in their applications. Model developers and researchers don’t usually consider cost during their evaluations, while for downstream developers, cost is a key factor. “There are several hurdles to cost evaluation,” the paper noted. “Different providers can charge different amounts for the same model, the cost of an API call might change overnight, and cost might vary based on model developer decisions, such as whether bulk API calls are charged differently.”
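A small sketch of that cost model with invented prices: the total is the one-time optimization cost plus a variable cost per run, so an agent that calls the model five times per task quickly dwarfs a single-call baseline at scale.

```python
def total_cost(fixed_optimization_cost: float, calls_per_run: int,
               tokens_per_call: int, price_per_1k_tokens: float,
               runs: int) -> float:
    # Variable cost accrues on every run; the fixed cost is paid once.
    variable_per_run = calls_per_run * tokens_per_call / 1000 * price_per_1k_tokens
    return fixed_optimization_cost + runs * variable_per_run

baseline = total_cost(50, calls_per_run=1, tokens_per_call=2000,
                      price_per_1k_tokens=0.01, runs=100_000)
retry_agent = total_cost(50, calls_per_run=5, tokens_per_call=2000,
                         price_per_1k_tokens=0.01, runs=100_000)
print(f"baseline: ${baseline:,.0f}, retry agent: ${retry_agent:,.0f}")  # $2,050 vs $10,050
```

As the paper notes, the per-token price itself is unstable across providers and over time, so any such estimate needs regular refreshing.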


10 ways to prevent shadow AI disaster

Shadow AI is practically inevitable, says Arun Chandrasekaran, a distinguished vice president analyst at research firm Gartner. Workers are curious about AI tools, seeing them as a way to offload busy work and boost productivity. Others want to master their use, seeing that as a way to prevent being displaced by the technology. Others became comfortable with AI for personal tasks and now want the technology on the job. ... shadow AI could cause disruptions among the workforce, he says, as workers who are surreptitiously using AI could have an unfair advantage over those employees who have not brought in such tools. “It is not a dominant trend yet, but it is a concern we hear in our discussions [with organizational leaders],” Chandrasekaran says. Shadow AI could introduce legal issues, too. ... “There has to be more awareness across the organization about the risks of AI, and CIOs need to be more proactive about explaining the risks and spreading awareness about them across the organization,” says Sreekanth Menon, global leader for AI/ML services at Genpact, a global professional services and solutions firm. 



Quote for the day:

“In matters of principle, stand like a rock; in matters of taste, swim with the current.” -- Thomas Jefferson

Daily Tech Digest - July 08, 2024

How insurtech startups are addressing the challenges of slow processes in the insurance sector

Even though compliance and regulation are critical for the security of both the insurers and customers, the regulatory process can be quite long. Compliance requirements demand meticulous attention to detail and can significantly prolong the approval process for new products and services. Another factor is risk aversion, which within the industry fosters a culture of caution, where insurers are hesitant to embrace change and experiment with new approaches to product development and underwriting. ... One of the solutions for these industry challenges lies in the collaboration of the insurance sector and the latest technologies. Insurtech solutions offer myriad innovative tools and technologies that promise to streamline product development and automate underwriting processes. One such solution gaining traction is artificial intelligence (AI) and machine learning algorithms, which can analyse vast amounts of data in real time to assess risk and expedite underwriting decisions.


Transforming Business Practices Through Augmented Intelligence

While AI raises apprehensions about potential job displacement, viewing it solely as a threat overlooks its capacity to enhance human capabilities, as evidenced by historical technological advancements. Training and education play a key role in this process, as AI has become an integral part of our reality and must be harnessed to its full potential. It is essential to align the use of artificial intelligence with the overall strategy of the organization for smooth integration of applications with data, processes, and collaboration between stakeholders. In a landscape where the internet simplifies transactions, software provides tools, and AI leverages data to make informed decisions, training and education become crucial. ... At its core, technology has always revolved around processing data. When viewed through the lens of enterprise architecture, an AI-powered machine learning tool can adeptly craft roadmaps tailored for businesses. Through advanced AI analytics, automation, and recommendation systems, enterprise architecture facilitates more informed and expedited decision-making processes.


Request for proposal vs. request for partner: what works best for you?

An RFProposal is an efficient choice when the nature of the work is standardized, while an RFPartner is the better choice when the buying organization is seeking a strategic partner for the overall best fit to meet its needs. ... When organizations shift to wanting to find a partner with the best possible solution, it’s important to understand how the nature of the selection criteria changes. With an RFPartner, buyers evaluate suppliers not only based on technical capabilities but also on the best value of the solution. ... “On the surface, an RFPartner sounds like a heavy lift, but we find that the overall time and effort is about the same,” he says. “In an RFProposal, the buyer is spending more time upfront defining the specs and in contentious negotiations. The RFPartner process flips this on its head and creates a more integrated bid solution that generates better solutions, spending more time together with the supplier co-creating, especially if your aim is making the shift to a highly collaborative vested business model to achieve strategic business outcomes.”


If you’re a CISO without D&O insurance, you may need to fight for it

D&O insurance covers the personal liabilities of corporate directors and officers in the event of incidents that lead to financial losses, reputational damage, or legal consequences. Without adequate D&O coverage, CISOs are left vulnerable, highlighting the need for this in an organization’s risk-management strategy. ... Lisa Hall, CISO at privately held Safebase, agrees that CISOs at all companies should be covered under their organizations’ D&O insurance policies, particularly in light of these new regulations. “I do think adding CISOs to D&O insurance will be more and more of a thing, and there is, for sure, more chatter in my CISO groups about how companies are handling this,” she says. “A lot of CISOs are also taking out errors and omissions insurance personally. I have that just for the consulting and advisory work I do.” ... “A lot of CISOs are thinking about this, especially after SolarWinds,” she says. “And if we feel that we’re not 100% protected for any decision we make, and we can be personally liable for a breach or possible incident even if we do the right thing, it’s really pushing CISOs to say, ‘Hey, company, I’ll join if you cover me or give me a different title.’”


How DORA is fortifying Europe’s financial future with a new take on operational resilience

For DORA, digital operational resilience very simply means “the ability of a financial entity to build, assure, and review its operational integrity and reliability by ensuring, either directly or indirectly through the use of services provided by ICT third-party service providers, the full range of ICT-related capabilities needed to address the security of the network and information systems which a financial entity uses, and which support the continued provision of financial services and their quality, including throughout disruptions”. Developing on this statement in a conversation with FinTech Futures, Simon Treacy, a senior associate at global law firm Linklaters, describes DORA as “a very prescriptive framework for financial entities, primarily to build and improve the way that they manage ICT risk”. “It applies very broadly across the EU regulated financial sector,” he continues, “and really part of its aim is to harmonise standards so that the smallest payments firm is subject to the same rules for operational resilience as the biggest banks and insurers.”


Data Sprawl: Continuing Problem for the Enterprise or an Untapped Opportunity?

Data fabric technologies excel in integrating and managing data across various environments. However, they often focus on conventional data sources like databases, data lakes, or data warehouses. The result is a gap in integrating and extracting value from data residing in numerous SaaS applications, as they may not seamlessly fit into these traditional data repositories. The combined solution of data fabric and iPaaS can address complex business challenges, such as integrating data from SaaS applications with traditional data sources. This capability is particularly valuable in today’s business landscape, where data is increasingly scattered across various cloud and on-premises environments. The merging of data fabric and iPaaS technologies offers a groundbreaking solution to this challenge, opening the door to new opportunities in data management and analysis. The integration of data fabric with iPaaS addresses the complexity and expertise-dependency in iPaaS. Data fabric can enable users to discover, understand, and verify data before integration flows are built. 


AI’s moment of disillusionment

AI, whether generative AI, machine learning, deep learning, or you name it, was never going to be able to sustain the immense expectations we’ve foisted upon it. I suspect part of the reason we’ve let it run so far for so long is that it felt beyond our ability to understand. It was this magical thing, black-box algorithms that ingest prompts and create crazy-realistic images or text that sounds thoughtful and intelligent. And why not? The major large language models (LLMs) have all been trained on gazillions of examples of other people being thoughtful and intelligent, and tools like ChatGPT mimic back what they’ve “learned.” ... We go through this process of inflated expectations and disillusionment with pretty much every shiny new technology. Even something as settled as cloud keeps getting kicked around. My InfoWorld colleague, David Linthicum, recently ripped into cloud computing, arguing that “the anticipated productivity gains and cost savings have not materialized, for the most part.” I think he’s overstating his case, but it’s hard to fault him, given how much we (myself included) sold cloud as the solution for pretty much every IT problem.


How nation-state cyber attacks disrupt public services and undermine citizen trust

While nation-states do have advanced capabilities and visibility that are hard or impossible for cyber criminals to replicate, the general strategy for attackers is to target vulnerable perimeter devices such as VPNs or firewalls as an entry point to the network. Next, they focus on obtaining privileged credentials and leverage legitimate software to masquerade as normal activity while they scout the environments for valuable data or large repositories to disrupt. It’s important to note that the commonly exploited vulnerabilities in government IT systems are not distinctly different from the vulnerabilities exploited more broadly. Government IT systems are often extremely diverse and thus subject to a variety of exploits. ... Currently, there are numerous policies and regulations, both domestically and internationally, which are inconsistent and vary in their requirements. These administrative requirements take significant resources which could otherwise be used to strengthen a company’s cybersecurity program.


How Quantum Computing Will Revolutionize Cloud Analytics

As we peer into the future of quantum computing in cloud analytics, the emphasis on collaboration and continuous innovation becomes undeniable. Integrating quantum technologies with cloud systems is not just a technological upgrade but a paradigm shift requiring robust partnerships across academia, industry, and government sectors. For instance, IBM’s quantum network includes over 140 members, including start-ups, research labs, and educational institutions, working together to advance quantum computing. This collaborative model is essential because the challenges in quantum computing are not just about hardware or software alone but about creating an ecosystem that supports an entirely new kind of computing. That ecosystem comprises components such as quantum hardware development, quantum algorithms, software tools, and educational resources. The network has also made significant achievements, such as developing quantum hardware like the IBM Quantum System One, advancing quantum algorithms for practical applications in chemistry and materials science, and creating the Qiskit software development kit to make quantum programming more accessible.
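As a taste of what the Qiskit SDK makes accessible, here is a minimal two-qubit entanglement (Bell-state) circuit. It needs only the open-source qiskit package and a local statevector simulation; it is not tied to any particular IBM system.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into an equal superposition
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # {'00': 0.5, '11': 0.5}
```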


How continuous learning is reshaping the workforce

Gone are the days when lengthy training programs were sought after and people took breaks from their careers to pick up an upskilling program. Navpreet Singh highlights that upskilling will become an ongoing process integrated into the workday. “The focus will shift from acquiring specific job skills to fostering adaptability and lifelong learning. Critical thinking, problem-solving, and creativity will be paramount as automation takes over routine tasks. Traditional ways of learning may not always reflect the skills needed. Alternative credentials, like badges and micro-credentials, will showcase the specific skills employees possess, making them more competitive. By embracing this future of upskilling, we can ensure our workforce is adaptable, future-proof, and ready to drive innovation in the ever-evolving automotive industry,” explains Singh. Within the next decade or so, we will see greater demand for agile ed-tech tools that help employees learn on the go and prepare them for new roles, says Daniele Merlerati, Chief Regional Officer APAC, Baltics, Benelux at Gi Group Holding.



Quote for the day:

"Perseverance is failing nineteen times and succeeding the twentieth." -- Julie Andrews

Daily Tech Digest - July 07, 2024

How Good Is ChatGPT at Coding, Really?

A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI’s ChatGPT in terms of functionality, complexity and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code—with success rates ranging anywhere from as poor as 0.66 percent to as good as 89 percent—depending on the difficulty of the task, the programming language, and a number of other factors. While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code. ... Overall, ChatGPT was fairly good at solving problems in the different coding languages—but especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively. “However, when it comes to the algorithm problems after 2021, ChatGPT’s ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems,” Tang notes.


What can devs do about code review anxiety?

A lot of folks reported that either they would completely avoid picking up code reviews, for example. So maybe someone's like, “Hey, I need a review,” and folks are like, “I'm just going to pretend I didn't see that request. Maybe somebody else will pick it up.” So just kind of completely avoiding it because this anxiety refers to not just getting your work reviewed, but also reviewing other people's work. And then folks might also procrastinate, they might just kind of put things off, or someone was like, “I always wait until Friday so I don't have to deal with it all weekend and I just push all of that until the very last minute.” So definitely you see a lot of avoidance. ... there is this misconception that only junior developers or folks just starting out experience code review anxiety, with the assumption that it's only because you're experiencing the anxiety when your work is being reviewed. But if you think about it, anytime you are a reviewer, you're essentially asked to contribute your expertise and so there is an element of, “If I mess up this review, I was the gatekeeper of this code. And if I mess it up, that might be my fault.” So there's a lot of pressure there.
 

Securing the Growing IoT Threat Landscape

What’s clear is that there should be greater collective responsibility between stakeholders to improve IoT security outlooks. A multi-stakeholder response is necessary, leading to manufacturers prioritising security from the design phase, to governments implementing legislation to mandate responsibility. Currently, some of the leading IoT issues relate to deployment problems. Alex suggests that IT teams also need to ensure default device passwords are updated and complex enough to not be easily broken. Likewise, he highlights the need for monitoring to detect malicious activity. “Software and hardware hygiene is essential, especially as IoT devices are often built on open source software, without any convenient, at scale, security hardening and update mechanisms,” he highlights. “Identifying new or known vulnerabilities and having an optimised testing and deployment loop is vital to plug gaps and prevent entry from bad actors.” A secure-by-design approach should ensure more robust protections are in place, alongside patching and regular maintenance. Alongside this, security features should be integrated from the start of the development process.


Beyond GPUs: Innatera and the quiet uprising in AI hardware

“Our neuromorphic solutions can perform computations with 500 times less energy compared to conventional approaches,” Kumar stated. “And we’re seeing pattern recognition speeds about 100 times faster than competitors.” Kumar illustrated this point with a compelling real-world application. ... Kumar envisions a future where neuromorphic chips increasingly handle AI workloads at the edge, while larger foundational models remain in the cloud. “There’s a natural complementarity,” he said. “Neuromorphics excel at fast, efficient processing of real-world sensor data, while large language models are better suited for reasoning and knowledge-intensive tasks.” “It’s not just about raw computing power,” Kumar observed. “The brain achieves remarkable feats of intelligence with a fraction of the energy our current AI systems require. That’s the promise of neuromorphic computing – AI that’s not only more capable but dramatically more efficient.” ... As AI continues to diffuse into every facet of our lives, the need for more efficient hardware solutions will only grow. Neuromorphic computing represents one of the most exciting frontiers in chip design today, with the potential to enable a new generation of intelligent devices that are both more capable and more sustainable.


Artificial intelligence in cybersecurity and privacy: A blessing or a curse?

AI helps cybersecurity and privacy professionals in many ways, enhancing their ability to protect systems, data, and users from various threats. For instance, it can analyse large volumes of data, spot anomalies, and identify suspicious patterns for threat detection, which helps to find unknown or sophisticated attacks. AI can also defend against cyber-attacks by analysing and classifying network data, detecting malware, and predicting vulnerabilities. ... The harmful effects of AI may be fewer than the positive ones, but they can have a serious impact on organisations that suffer from them. Clearly, as AI technology advances, so do the strategies for both protecting and compromising digital systems. Security professionals should not ignore the risks of AI, but rather prepare for them by using AI to enhance their capabilities and reduce their vulnerabilities. ... As attackers are increasingly leveraging AI, integrating AI defences is crucial to stay ahead in the cybersecurity game. Without it, we risk falling behind.” Consequently, cybersecurity and privacy professionals, and their organisations, should prepare for AI-driven cyber threats by adopting a multi-faceted approach to enhance their defences while minimising risks and ensuring ethical use of technology.
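The anomaly-spotting idea can be reduced to a toy example: learn a baseline from normal traffic and flag samples that deviate sharply from it. Real AI-driven detection is far more sophisticated; the data and threshold below are invented.

```python
def flag_anomalies(baseline, new_samples, threshold=3.0):
    """Flag samples more than `threshold` standard deviations from the baseline mean."""
    mean = sum(baseline) / len(baseline)
    variance = sum((x - mean) ** 2 for x in baseline) / len(baseline)
    std = variance ** 0.5 or 1.0  # avoid division by zero on a flat baseline
    return [x for x in new_samples if abs(x - mean) / std > threshold]

baseline_mb = [102, 98, 110, 95, 105, 99, 101, 97, 103]  # normal hourly traffic, invented
print(flag_anomalies(baseline_mb, [104, 940, 96]))  # flags the 940 MB spike
```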


Intel is betting big on its upcoming Lunar Lake XPUs to change how we think of AI in our PCs

Designed with power efficiency in mind, the Lunar Lake architecture is ideal for portable devices such as laptops and notebooks. These processors balance performance and efficiency by integrating Performance Cores (P-cores) and Efficiency Cores (E-cores). This combination allows the processors to handle both demanding tasks and less intensive operations without draining the battery. The Lunar Lake processors will feature a configuration of up to eight cores, split equally between P-cores and E-cores. This design aims to improve battery life by up to 60 per cent, positioning Lunar Lake as a strong competitor to ARM-based CPUs in the laptop market. Intel anticipates that these will be the most efficient x86 processors it has ever developed. ... A major highlight of the Lunar Lake processors is the inclusion of the new Xe2 GPUs as integrated graphics. These GPUs are expected to deliver up to 80 per cent better gaming performance compared to previous generations. With up to eight second-generation Xe-cores, the Xe2 GPUs are designed to support high-resolution gaming and multimedia tasks, including handling up to three 4K displays at 60 frames per second with HDR.


Cyber Threats And The Growing Complexity Of Cybersecurity

Irvine envisions a future where the cybersecurity industry undergoes significant disruption, with a greater emphasis on data-driven risk management. “The cybersecurity industry is going to be disrupted severely. We start to think about cybersecurity more as a risk and we start to put more data and more dollars and cents around some of these analyses,” she predicted. As the industry matures, Dr. Irvine anticipates a shift towards more transparent and effective cybersecurity solutions, reducing the prevalence of smoke and mirrors in the marketplace. She also claims that “AI and LLMs will take over jobs. There will be automation, and we're going to need to upskill individuals to solve some of these hard problems. It's just a challenge for all of us to figure out how.” Kosmowski also remarked that the industry must remain on top of what will continue to be a definitive risk to organizations: “Over 86% of companies are hybrid and expect to remain hybrid for the foreseeable future, plus we know IT proliferation is continuing to happen at a pace that we have never seen before.”


The blueprint for data center success: Documentation and training

In any data center, knowledge is a priceless asset. Documenting configurations, network topologies, hardware specifications, decommissioning regulations, and other items mentioned above ensures that institutional knowledge is not lost when individuals leave the organization. So, no need to panic once the facility veteran retires, as you’ll already have all the information they have! This information becomes crucial for staff, maintenance personnel, and external consultants to understand every facet of the systems quickly and accurately. It provides a more structured learning path, facilitates a deeper understanding of the data center's infrastructure and operations, and allows facilities to keep up with critical technological advances. By creating a well-documented environment, facilities can rest assured knowing that authorized personnel are adequately trained, and vital knowledge is not lost in the shuffle, contributing to overall operational efficiency and effectiveness, and further mitigating future risks or compliance violations.


Why Knowledge Is Power in the Clash of Big Tech’s AI Titans

The advanced AI models currently under development across big tech -- models designed to drive the next class of intelligent applications -- must learn from more extensive datasets than the internet can provide. In response, some AI developers have turned to experimenting with AI-generated synthetic data, a risky proposition that could potentially put an entire engine at risk if even a small semblance of the learning model is inaccurate. Others have pivoted to content licensing deals for access to useful, albeit limited, proprietary training data. ... The real differentiating edge lies in who can develop a systemic means of achieving GenAI data validation, integrity, and reliability with a certificated or “trusted” designation, in addition to acquiring expert knowledge from trusted external data and content sources. These twin pillars of AI trust, coupled with the raw computing power of new and emerging data centers, will likely be the markers of which big tech brands gain the immediate upper hand.


Should Sustainability be a Network Issue?

The beauty of replacing existing network hardware components with energy-efficient, eco-friendly, small form factor infrastructure elements wherever possible is that no adjustments have to be made to network configurations and topology. In most cases, you're simply swapping out routers, switches, etc. The need for these equipment upgrades naturally occurs with the move to Wi-Fi 6, which requires new network switches, routers, etc., in order to run at full capacity. Hardware replacements can be performed on a phased plan that commits a portion of the annual budget each year for network hardware upgrades ... There is a need in some cases to have discrete computer networks that are dedicated to specific business functions, but there are other cases where networks can be consolidated so that resources such as storage and processing can be shared. ... Network managers aren’t professional sustainability experts—but local utility companies are. In some areas of the U.S., utility companies offer free onsite energy audits that can help identify areas of potential energy and waste reduction.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain

Daily Tech Digest - July 06, 2024

A CISO's Guide to Avoiding Jail After a Breach

The key to avoiding trouble as a security leader, Nall says, is awareness of three things: how government investigations work, how the government interacts with companies during the process, and the incentives companies have to resolve their cases in one way or another. When push comes to shove, for example, companies will be pressured to name and shame individuals. In his proceedings, Sullivan's legal team painted a picture of a company (Uber) trying to rebrand itself, and holding him up as a lamb to the slaughter. "It's very unfortunate because the consequences are faced by one individual, or a few individuals, although the ability to make sure that [an incident] doesn't happen is a community-based effort within organizations," says ArmorCode's Karthik Swarnam, formerly chief information security officer (CISO) of Kroger, DIRECTV, and TransUnion. To avoid being singled out (and because it's good security practice), CISOs should focus on building clear and robust lines of communication that bring other board members into the cybersecurity decision-making process.


How Pearson’s CIO manages technical debt

Keen to address this, Wells and the Pearson technology working group, which includes tech leadership from across the brand’s different organizations, came up with 12 key attributes, including security and maintainability, to rate their technology assets in a consistent way. These tech debt audits provided a clearer picture of where their biggest risks were, which, in turn, allowed them to prioritize what needs to be addressed first. “We developed an algorithm to measure our different applications based on these 12 categories so we can eliminate technical debt via a more strategic and standardized approach,” she says, noting that the goal was to do away with any guesswork and make decisions based on opportunities and potential revenue risks. ... As part of the process, she and her team needed to get the various leaders from across the business on board by making sure they understood that technical debt isn’t just a technology problem. “We really had to communicate that this is a priority, but we couldn’t do so by only talking to them about technology,” she says.
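A hedged sketch of what such a scoring algorithm might look like: rate each application on the attributes, weight the risk-heavy ones more, and rank the portfolio by accumulated debt. Only security and maintainability are named in the article; the other attributes, the weights, and the applications here are hypothetical.

```python
# Weighted attributes; a real rubric would cover all 12 categories.
WEIGHTS = {"security": 3.0, "maintainability": 2.0,
           "scalability": 1.5, "documentation": 1.0}

def debt_score(ratings: dict) -> float:
    # Ratings run 1 (poor) to 5 (excellent); invert so a higher score means more debt.
    return sum(WEIGHTS[attr] * (5 - rating) for attr, rating in ratings.items())

apps = {
    "legacy_lms":     {"security": 2, "maintainability": 1, "scalability": 3, "documentation": 2},
    "new_storefront": {"security": 4, "maintainability": 4, "scalability": 4, "documentation": 3},
}
for name, ratings in sorted(apps.items(), key=lambda kv: debt_score(kv[1]), reverse=True):
    print(f"{name}: debt score {debt_score(ratings):.1f}")  # legacy_lms ranks first
```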


Strategic alignment in the age of AI: The 7 foundations of competitive success

The strategy must align with the capabilities of the organization and the competitive reality of the environment. Such an alignment has never been more important, as artificial intelligence (AI) and other changes disrupt industries and sectors. Before rushing to adopt the latest AI tool, whether it is deep learning or large language models, organizations must assess whether the new tech is strategically aligned. ... Aligning people with the desired strategic position and vision for the organization is critical. In high-performing organizations, employees and members understand their strategic mission and vision and are dedicated to achieving it. They become acolytes of their leaders and passionate advocates for their organizations. They see how their role contributes to the strategy of the organization and execute with a sense of purpose and teamwork. How many of your employees can articulate how your AI efforts advance your strategy? ... In truth, strategic alignment may be rare. If you are fortunate, you can recall a situation where alignment occurred, allowing you and your organization to achieve incredible heights. 


The AI Revolution Will Not Be Monopolized

Open source in AI and machine learning is not just about software; it's about the synergy of code and data. The growing ecosystem of open-source models encompasses everything from code to data and weights, making powerful tools widely accessible. ... The term "large language models" (LLMs) is often used broadly and imprecisely, muddying discussions about their capabilities and applications. The distinction between encoder models and large generative models is therefore very important. Encoder models involve task-specific networks that predict structured data, while large generative models rely on prompts to produce free-form text, necessitating additional logic to extract actionable insights. ... Companies like OpenAI might dominate the market for user-facing products but not necessarily the AI and software components behind them. While user data is advantageous for improving human-facing products, it is less critical for enhancing the foundational machine-facing tasks. Gaining general knowledge doesn't require specific data, which is at the core of the innovation behind large generative models.


CISA Warns Chemical Facilities of Data Theft After Hacker Breached CSAT Security Tool via Ivanti

CISA says that all information in the CSAT tool was encrypted using the AES-256 algorithm, and the keys were also inaccessible “from the type of access the threat actor had to the system.” The agency also found “no evidence of credentials being stolen.” However, impacted organizations should assume data theft “out of abundance of caution” and that “this information could have been inappropriately accessed,” the agency said. The agency also stated that even without data theft, the intrusion “met the threshold of a major incident under the Federal Information Security Modernization Act (FISMA),” given the number of individuals and chemical facilities impacted. Subsequently, CISA directed impacted chemical facilities to maintain cyber and physical security measures to prevent potential attacks as a result of the cyber incident. Similarly, CISA encourages individuals who had CSAT accounts to reset their passwords for all online accounts that share the same password to prevent future password spraying attacks.


Autonomous Vehicles Can Make All Cars More Efficient

To illustrate how the technology works, the team installed a traffic signal along the demonstration pathway. Gankov says an actual traffic-light timer from a traffic-signal cabinet was connected to a TV screen, providing a visual for attendees. A dedicated short-range communications (DSRC) radio was also attached, broadcasting the signal’s phase and timing information to the vehicle. This setup enabled the vehicle to anticipate the traffic light’s actions far more accurately than a human driver could. ... These autonomous driving strategies can lead to significant energy savings, benefiting not just the autonomous vehicles themselves, but also the entire traffic ecosystem. “In a regular traffic situation, autonomous vehicles operating in ecomode influence the driving behavior of all the cars behind them,” says Gankov. “The result is that even vehicles with Level 0 autonomy use fuel more sparingly.” ... Employing techniques like efficient highway merging was a key strategy in their approach to making the most of each tank of fuel or battery charge.
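A simplified glimpse of what the broadcast phase-and-timing data enables: the vehicle can choose an approach speed that arrives just as the signal turns green, avoiding a full stop. The speed bounds and scenario numbers below are invented.

```python
def eco_approach_speed(distance_m: float, seconds_to_green: float,
                       v_min: float = 5.0, v_max: float = 16.7) -> float:
    """Return a target speed in m/s, clamped to road limits."""
    if seconds_to_green <= 0:
        return v_max                       # already green: proceed normally
    ideal = distance_m / seconds_to_green  # arrive exactly at the phase change
    return max(v_min, min(v_max, ideal))

# 200 m from the stop line, with the light turning green in 18 s:
v = eco_approach_speed(200, 18)
print(f"target speed: {v:.1f} m/s ({v * 3.6:.0f} km/h)")  # ~11.1 m/s, 40 km/h
```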


SyncBack is the best free backup software on Windows and everyone should use it

On top of changing the type of sync or backup you want to perform, SyncBack also has a ton of configuration options for almost everything about how it works. By default, each profile you create is a manual backup, so whenever you want to back up your files, you just run the profile. Even then, if you want to run it on demand, you can create a hotkey to quickly run a specific profile and run it at any time. But if you're a "set it and forget it" type of person, then you can also automate the backups. SyncBack uses the Task Scheduler in Windows to allow you to create a scheduled backup with whatever frequency you prefer. ... Since each profile is only meant to sync one folder, if you want to sync files in completely different locations, you'll need separate profiles, but you can create group profiles so that all the profiles within are run at the same time, rather than sequentially. You can also enable things like compression for files copied, decrypt files when they're copied, change whether you want files to be copied when syncing two folders or if the mismatched file should be deleted, and even enable a rudimentary form of ransomware detection.
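SyncBack itself is a closed-source Windows application, but a one-way backup profile boils down to something like this Python sketch: walk the source folder and copy any file that is missing or newer at the destination. The paths are placeholders.

```python
import shutil
from pathlib import Path

def run_profile(source: str, destination: str) -> None:
    src, dst = Path(source), Path(destination)
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        # Copy only files that are missing or modified since the last run.
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps

# run_profile(r"C:\Users\me\Documents", r"D:\Backups\Documents")
```

Scheduling such a script through the Windows Task Scheduler mirrors what SyncBack does for its automated profiles.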


From Scalability to Speed: Generative AI has Put Testing on Steroids

In the past, testing quality has been a big concern, necessitating early integration of QA into the development life cycle. Now, with GenAI, the focus has advanced beyond simple assurance to actively engineering quality. The key distinction lies in the approach — classic AI involves human intervention and manual processes, while GenAI automates and innovates testing methodologies. Consider dealing with requirement quality early in the software development life cycle. Using classic AI, a business analyst might define requirements to cover various interpretations, which may lead to certain ambiguity and potential failures. ... using GenAI is not about replacing the human workforce, but enhancing our capabilities. The shortage of senior automation testers results in lost business revenue. However, with GenAI, junior engineers can now harness the power of gen AI-enabled automation, performing tasks with the built-in knowledge of a seasoned architect. Gen AI’s prowess is not arbitrary; it has learned from billions of data points. By combining traditional knowledge with AI capabilities, new solutions bring scalability and speed to testing.
 

How organisations can thrive with resilience and empathy

While many organisations embrace the value of empathy, they often fall short in delivering it with genuine sincerity. Superficial expressions of empathy without meaningful actions, including consistent recognition, lead to employee dissatisfaction and high turnover rates. However, demonstrating sincere empathy and providing meaningful recognition can be challenging. Leaders sometimes face criticism for being "too considerate," particularly when their decisions appear to disproportionately benefit employees during setbacks. This dynamic can result in empathy fatigue, where the constant demand for empathetic responses and recognition strains HR professionals and leaders. ... Change can only occur when an organisation adopts the principles of nimble resilience and empathy, using them to shape policies, programs, and workplace culture. This approach encourages employees to build relationships, find new solutions, work collaboratively across disciplines, and embrace a forward-thinking perspective. As a result, trust in leaders and team members increases, along with connections to the organisation and its purpose.


Mind the Gap: The Product in Data Product Is Reliability

The data product concept has been fleshed out in recent years with definitions, reference architectures, and platforms. They consist of … actually, let’s not worry about what data products consist of. At least, not right now. That’s not the important part. Instead, let’s start where we should always start: the consumer. ... Your assurance that its contents are always correct is the most significant distinguishing characteristic of a data product. You provide the ongoing validation, certification, and research so that your users don’t have to. You ensure that the data product is kept current with newly arriving data. You continuously monitor its data quality. In addition to content, you must also be concerned with semantics. Changes in the business as implemented in the source systems and propagated through the data may necessitate changes to the data product. ... Technology can facilitate, but technology alone is not remotely sufficient. I’ve seen the data product label slapped on data marts, summary tables, and even raw data with none of the curation or monitoring.
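A minimal sketch of that continuous validation, with invented thresholds and field names: every refresh checks freshness and a basic quality rule, and fails loudly instead of silently serving a broken product.

```python
from datetime import datetime, timedelta, timezone

def validate(rows: list, last_loaded: datetime,
             max_age: timedelta = timedelta(hours=24),
             max_null_rate: float = 0.01) -> list:
    """Return a list of problems; an empty list means the product is servable."""
    problems = []
    if datetime.now(timezone.utc) - last_loaded > max_age:
        problems.append("stale: no new data within the freshness window")
    if rows:
        nulls = sum(1 for r in rows if r.get("customer_id") is None)
        if nulls / len(rows) > max_null_rate:
            problems.append(f"null rate on customer_id: {nulls / len(rows):.1%}")
    return problems

rows = [{"customer_id": 1}, {"customer_id": None}, {"customer_id": 3}]
print(validate(rows, last_loaded=datetime.now(timezone.utc)))  # flags the 33% null rate
```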



Quote for the day:

"The only limit to our realization of tomorrow will be our doubts of today." -- Frank D Roosevelt

Daily Tech Digest - July 05, 2024

AI washing: Silicon Valley’s big new lie

The cumulative effect of AI washing is that it leads both the public and the technology industry astray. It fuels the delusion that AI can do things it cannot do. It makes people think AI is some kind of all-purpose solution to every problem — or a slippery slope into dystopia, depending on one’s worldview. AI washing incentivizes inferior solutions, focusing on “magic” rather than quality. Claiming that your dog-washing hose is “powered by AI” doesn’t mean you end up with a cleaner dog. It just means you have an overpriced hose. AI washing warps funding. Silicon Valley investment nowadays is totally captured by both actual AI and AI-washing solutions. Even savvy investors may overlook AI-washing exaggeration and lies knowing that the AI story will sell in the marketplace thanks to buyer naiveté. The biggest problem, however, is not delusional selling by the industry, but self-delusion. Purveyors of AI solutions believe that human help is a badge of shame, when in fact I think human involvement would be received with relief. People actually want humans involved in their shopping and driving experience.


Healing cyber wounds in global healthcare

Since AI technology has advanced and medical device security lags, the ease of attack and the potential reward for doing so have made healthcare institutions too tempting to ignore. The Office for Civil Rights (OCR) at Health and Human Services (HHS) is investigating the Change Healthcare attack to understand how it happened. The investigation will address whether Change Healthcare followed HIPAA rules. ... The financial impact of cyberattacks on healthcare providers can be devastating. The Change Healthcare breach led to significant cash flow disruptions, with providers losing millions daily. In response to this crisis, industry leaders and political figures have called for federal funding to support healthcare providers and ensure the continuity of essential services. The Senate majority leader and the American Hospital Association (AHA) have urged the federal government to provide financial assistance to mitigate the impact of the cyberattack, including accelerated and advanced payments to hospitals, pharmacies, and other affected entities. This federal funding can help healthcare providers adopt advanced security measures and recover from the financial impact of cyberattacks.


The next 10 years for cloud computing

The anticipated productivity gains and cost savings have not materialized, for the most part. The promised efficiencies did not translate into significant improvements in operational productivity for many organizations, and cloud platforms cost at least twice as much as traditional systems. The sharp decline in the costs of on-premises computing and storage servers during the past decade exacerbated the situation for public cloud providers. This threw a monkey wrench into the savings that the cloud promised over traditional on-premises systems. ... Cloud providers are now faced with “cloud exit” issues while focusing on AI growth. Their market continues to stagnate as enterprises find that a mix of on-premises and cloud platforms is perhaps more cost-effective, considering the operational costs of AI. In other words, AI is delaying the reality they would otherwise likely face in the short term. ... The days of enterprises buying cloud systems in haste left too many to repent at leisure. Vendors must better understand what enterprises should pay to find value and thus reduce the exodus to colocation providers, managed service providers, and enterprise data centers.


State of play: cloud in financial services

Banks are fully aware of the need for digital transformation and shifting legacy applications to the cloud in order to remain competitive, but enacting it across the entire banking value chain in a unified manner is not a simple task. Omdia’s 2024 IT Enterprise Insights Survey, for instance, shows that most retail banks have made some inroads into digital transformation, with respondents most likely to have made progress in adopting cloud services, but just 29% state that they have made significant progress. Many banks have taken a phased approach to digital transformation, often working with multiple product vendors. But there is a growing recognition that this approach brings its own challenges in terms of managing numerous vendors and roadmaps. ... Modernising the core banking system can be costly, time-consuming, and complex. However, anecdotal evidence suggests that banks are spending 85% on maintaining their existing core banking tech and the remainder on launching new products, which can be flipped once they have fully modernised their core, providing them with enormous scope to innovate.


What is dark fiber and is it right for your business?

The type of dark fiber available varies between locations. So-called metro dark fiber, typically found in built-up urban areas, tends to comprise larger cables with a higher fiber count, which means they offer more flexibility and different types of connection, such as point-to-multipoint, where a cable can service multiple destinations. Long-haul dark fiber, in contrast, is often constructed using single-mode fiber which has a smaller glass core, and as such is likely to only offer more simple, point-to-point, connections. However, there are no significant distance limitations on dark fiber, meaning it can be used to connect sites in locations many miles apart. Dark fiber can be an alluring solution for businesses with rapidly evolving or highly variable networking needs. Users can choose when and how to scale up bandwidth to meet the demands of their organization without having to wait for their ISP to carry out this process. It also avoids the limitations of a contract with an ISP, which will likely dictate the available data transfer rates and impose fees for network upgrades.


Examining the Risks of IT Hero Culture

In an IT hero culture, individual accomplishments are celebrated over teamwork, with a high value placed on swift responses and constant availability. This type of workplace includes a small group of individuals who bear a disproportionate responsibility for critical tasks and decision making. Typically, this culture appears in organizations lacking formal processes, requiring these so-called heroes to work extensive hours to maintain operations. ... IT hero culture —despite its immediate benefits—often proves to be a short-term solution with significant long-term drawbacks. When these indispensable individuals are absent, organizations face bottlenecks and inefficiencies. Transitioning to a process-driven culture enhances organizational effectiveness and efficiency, addressing these challenges. This transition, usually prompted by external stakeholders such as bankers, shareholders, and customers, as well as internal forces such as the board and senior management, moves away from overreliance on individual heroics to a more sustainable, team-oriented approach.


Will the cost of scaling infrastructure limit AI’s potential?

AI scaling, much like any other type of technology scaling, is dependent on infrastructure. “You can’t do anything else unless you go up from the infrastructure stack,” Paul Roberts, director of Strategic Account at AWS, told VentureBeat. Roberts noted that there was a big explosion of gen AI that got started in late 2022 when ChatGPT first went public. While in 2022 it might not have been clear where the technology was headed, he said that in 2024 AWS has its hands around the problem very well. AWS in particular has invested significantly in infrastructure, partnerships and development to help enable and support AI at scale. ... The resources required to train increasingly bigger LLMs aren’t the only issue. Bresniker noted that after an LLM is created, the inference is continuously run on them, and when that is running 24 hours a day, 7 days a week, the energy consumption is massive. “What’s going to kill the polar bears is inference,” Bresniker said. ... According to Bresniker, one potential way to improve AI scaling is to include deductive reasoning capabilities, in addition to the current focus on inductive reasoning.


Smashing Silos With a Vulnerability Operations Center (VOC)

The responsibility for VM typically sits within the security operations center (SOC). The SOC is, after all, the frontline defense against cyberthreats, equipped with the tools, resources and processes to identify and mitigate vulnerabilities. Yet this strategy comes with its pitfalls, as SOC teams are already navigating a variety of responsibilities, from managing active threats to threat hunting. Enter VOC, offering an approach that complements the SOC by prioritizing prevention rather than just responding to incidents. This collaboration between the two means that if the VOC discovers a log4j vulnerability, for instance, the SOC team will be promptly notified. Then, the response team can swoop in if prevention fails. A VOC lets organizations manage vulnerabilities strategically and coherently, which ensures that the most serious threats are handled systematically. This specialized entity within an organization focuses on identifying, assessing and mitigating vulnerabilities in IT systems and networks. It acts as a central hub for vulnerability management, leveraging advanced tools and processes to continuously monitor for security weaknesses and coordinate response strategies.
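A minimal sketch of that VOC-to-SOC handoff, with a placeholder severity threshold and notification hook: findings are triaged worst-first and critical ones are escalated to the response team.

```python
CRITICAL_CVSS = 9.0  # illustrative escalation threshold

def notify_soc(finding: dict) -> None:
    # In practice: open a ticket or page the on-call responder.
    print(f"SOC alert: {finding['id']} (CVSS {finding['cvss']}) on {finding['asset']}")

def triage(findings: list) -> list:
    """Sort vulnerabilities worst-first and flag critical ones for the SOC."""
    ordered = sorted(findings, key=lambda f: f["cvss"], reverse=True)
    for f in ordered:
        if f["cvss"] >= CRITICAL_CVSS:
            notify_soc(f)
    return ordered

triage([
    {"id": "CVE-2021-44228", "cvss": 10.0, "asset": "payments-api"},  # log4j
    {"id": "EXAMPLE-0001",   "cvss": 5.4,  "asset": "intranet-wiki"},  # hypothetical finding
])
```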


Software Engineering, Startup Thinking

The challenge for organizations trying to adopt a more agile approach is that there are often simply too many silos, not enough skilled people, and a saturated technology market with too many tools. “Turning around a culture like this that prohibits scale is time-consuming and takes, on average, three years to achieve,” he says. Given that the end goal of a more agile approach is to generate untrammeled innovation across an organization, getting the culture right is critical. ... Brial recommends fostering an environment where cross-functional teams bring together individuals from different departments, such as development, operations and security, to work collaboratively toward a common goal. This requires cross-training, where team members gather knowledge and skills in areas beyond their core expertise: developers learn about infrastructure and operations, while operations engineers gain insights into software development practices. “This cross-pollination of skills builds an understanding and sense of empathy between teams,” he says. Brial says every layer of an IT department should be moving toward “everything” as code, noting that provisioning and deploying any type of software is costly, time-consuming and complex.
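As a toy illustration of the “everything as code” idea (not any particular tool’s API), the sketch below declares desired infrastructure as data and reconciles toward it idempotently, which is what makes code-driven provisioning reviewable and repeatable rather than manual:

```python
# Desired state declared as data: diffable, reviewable, version-controlled.
DESIRED = {
    "web":    {"replicas": 3, "image": "shop:1.4.2"},
    "worker": {"replicas": 2, "image": "jobs:0.9.0"},
}

def reconcile(desired: dict, actual: dict) -> dict:
    """Idempotently move the actual state toward the desired state."""
    for name, spec in desired.items():
        if actual.get(name) != spec:
            print(f"reconciling {name}: {actual.get(name)} -> {spec}")
            actual[name] = spec
    for name in set(actual) - set(desired):
        print(f"removing {name} (no longer declared)")
        del actual[name]
    return actual

reconcile(DESIRED, {"web": {"replicas": 1, "image": "shop:1.4.1"}})
```

Running the same reconciliation twice produces no further changes, which is the property that makes this style of provisioning safe to automate.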


Logic bombs explained: Definition, examples, prevention

A logic bomb is a set of instructions embedded in a software system that, if specified conditions are met, triggers a malicious payload to take actions against the operating system, application, or network. The actual code that does the dirty work, sometimes referred to as slag code, might be a standalone application or be hidden within a larger program. ... The actual behavior of a logic bomb can range widely. Among the insider threats that make up much of the logic bomb landscape, a few types of attack are particularly common: file or hard drive deletion, either as a ransom threat or an act of revenge, and data exfiltration, as part of a plan to use privileged information in future employment. ... The best way to sniff out malicious code embedded in your own software, whether deliberately by a disgruntled employee or inadvertently in the form of a third-party library, is to bake secure coding practices, like those that are part of the DevSecOps philosophy, into your development pipeline. These practices are meant to ensure that any code passes security tests before it’s put into production, and would prevent a lone wolf insider attacker from unilaterally changing code in an insecure way.
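For illustration only, here is a deliberately defanged sketch of the trigger pattern code reviewers and static analysis rules look for: a dormant conditional keyed to a date (or an “is employee X still on the payroll?” check) that gates a payload. The payload here merely prints; real slag code would do the damage described above:

```python
import datetime

def nightly_cleanup():
    # ...legitimate-looking housekeeping code would be here...

    # Defanged logic-bomb pattern: a condition that stays dormant until a
    # future date, then fires. Reviewers should treat time- or identity-
    # conditional branches that gate destructive calls as red flags.
    if datetime.date.today() >= datetime.date(2030, 1, 1):
        print("payload would execute here")  # real bombs delete or exfiltrate
```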



Quote for the day:

"Each day you are leading by example. Whether you realize it or not or whether it's positive or negative, you are influencing those around you." -- Rob Liano

Daily Tech Digest - July 04, 2024

Understanding collective defense as a route to better cybersecurity

Organizations invoking collective defense to protect their IT and data assets will usually focus on sharing threat intelligence and coordinating threat response actions to counter malicious threat actors. Success depends on defining and implementing a collaborative cybersecurity strategy in which organizations, both internally and externally, work together across industries to defend against targeted cyber threats. ... Putting this into practice requires organizations to commit to coordinating their cybersecurity strategies to identify, mitigate and recover from threats and breaches. This should begin with a process that defines the stakeholders who will participate in the collective defense initiative. These can include anything from private companies and government agencies to non-profits and Information Sharing and Analysis Centers (ISACs). The approach will only work if it is based on mutual trust, so there is an important role for mechanisms such as non-disclosure agreements, clearly defined roles and responsibilities, and a commitment to operational transparency.


Meaningful Ways to Reward Your IT Team and Its Achievements

With technology rapidly advancing, it's more important than ever to invest in personalized IT team skill development and employee well-being programs, which are a win-win for employees and the companies they work for, says Carrie Rasmussen, CIO at human resources software provider Dayforce, in an email interview. ... Synchronize rewards to project workflows, Felker recommends. If it's a particularly difficult time for the team -- tight deadlines, major changes, and other pressing issues -- he suggests scheduling rewards prior to the work's completion to boost motivation. "Having the team get a boost mid-stream on a project is likely to create an additional reservoir of mental energy they can draw from as the project continues," Felker says. ... It's also important to celebrate success whenever possible and to acknowledge that the outcome was the direct result of great teamwork. "Five minutes of recognition from the CEO in a company update or other forum motivates not only the IT team but the rest of the organization to strive for recognition," Nguyen says. He also advises promoting significant team achievements on LinkedIn and other major social platforms. "This will aid recruiting and retention efforts."


Deepfake research is growing and so is investment in companies that fight it

Manipulating human likeness, such as creating deepfake images, video and audio of people, has become the most common tactic for misusing generative AI, a new study from Google reveals. The most common reason to misuse the technology is to influence public opinion – including swaying political opinion – but it is also finding its way into scams, fraud and other means of generating profit. ... Impersonations of celebrities or public figures, for instance, are often used in investment scams, while AI-generated media can also be used to bypass identity verification and conduct blackmail, sextortion and phishing scams. Because the primary data source is media reports, the researchers warn that the perception of AI-generated misuse may be skewed toward the cases that attract headlines. But despite concerns that sophisticated or state-sponsored actors will use generative AI, many of the cases of misuse were found to rely on popular tools that require minimal technical skills. ... With the threat of deepfakes becoming widespread, some companies are coming up with novel solutions that protect images online.


Building Finance Apps: Best Practices and Unique Challenges

By making compliance a central focus from day one of the development process, you maximize your ability to meet compliance needs, while also avoiding the inefficient process of retrofitting compliance features into the app later. For example, implementing transaction reporting after the rest of the app has been built is likely to be a much heavier lift than designing the app from the start to support that feature. ... The tech stack (meaning the set of frameworks and tools you use to build and run your app) can have major implications for how easy it is to build the app, how secure and reliable it is, and how well it integrates with other systems or platforms. For that reason, you'll want to consider your stack carefully, and avoid the temptation to go with whichever frameworks or tools you know best or like the most. ... Given the plethora of finance apps available today, it can be tempting to want to build fancy interfaces or extravagant features in a bid to set your app apart. In general, however, it's better to adopt a minimalist approach. Build the features your users actually want — no more, no less. Otherwise, you waste time and development resources, while also potentially exposing your app to more security risks.
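As a sketch of what compliance-first design can look like in practice, assume every money movement flows through a single code path that both executes the transfer and records it for reporting; the function and field names here are illustrative assumptions:

```python
import datetime, json

AUDIT_LOG = []  # In production: an append-only, tamper-evident store.

def execute_transfer(from_acct: str, to_acct: str, amount_cents: int) -> None:
    # Core business logic (balance checks, ledger updates) would live here.
    # Transaction reporting is part of the same path, not bolted on later:
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "from": from_acct,
        "to": to_acct,
        "amount_cents": amount_cents,
    })

execute_transfer("ACCT-001", "ACCT-002", 12_500)
print(json.dumps(AUDIT_LOG, indent=2))
```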


OVHcloud blames record-breaking DDoS attack on MikroTik botnet

Earlier this year, OVHcloud had to mitigate a massive packet rate attack that reached 840 Mpps (million packets per second), surpassing the previous record holder, an 809 Mpps DDoS attack targeting a European bank, which Akamai mitigated in June 2020. ... OVHcloud says many of the high packet rate attacks it recorded, including the record-breaking attack from April, originate from compromised MikroTik Cloud Core Router (CCR) devices designed for high-performance networking. Specifically, the firm identified compromised CCR1036-8G-2S+ and CCR1072-1G-8S+ models, which are used as small- to medium-sized network cores. Many of these devices exposed their administration interface online and ran outdated firmware, making them susceptible to attacks leveraging exploits for known vulnerabilities. The cloud firm hypothesizes that attackers might use the “Bandwidth Test” feature in MikroTik’s RouterOS, designed for network throughput stress testing, to generate high packet rates. OVHcloud found nearly 100,000 MikroTik devices that are reachable and exploitable over the internet, representing a large pool of potential targets for DDoS actors.


Set Goals and Measure Progress for Effective AI Deployment

Combining human expertise and AI capabilities to augment decision-making is an essential tenet of responsible AI principles. The current age of AI adoption should be considered a “coming together of humans and technology.” Humans will continue to be the custodians and stewards of data, which ties into Key Factor 2 about the need for high-quality data, as humans can help curate the relevant data sets to train an LLM. This is critical, and the “human-in-the-loop” facet should be embedded in all AI implementations to avoid fully autonomous systems. Apart from data curation, this allows humans to take more meaningful actions when equipped with relevant insights, thus achieving better business outcomes. ... Addressing bias, privacy, and transparency in AI development and deployment is the pivotal metric in measuring its success. As with any technology, laying out guardrails and rules of engagement is core to this factor. Enterprises such as Accenture implement measures to detect and prevent bias in their AI recruitment tools, helping to ensure fair hiring practices.
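A minimal sketch of such a human-in-the-loop gate appears below; the model call is a stand-in, and no real AI framework’s API is implied:

```python
def model_suggest(case_id: str) -> str:
    # Stand-in for an LLM call that drafts a decision or action.
    return f"Proposed action for {case_id}: approve refund of $120"

def human_approves(proposal: str) -> bool:
    # In production this would route to a review queue; here we ask on stdin.
    return input(f"{proposal}\nApprove? [y/N] ").strip().lower() == "y"

def decide(case_id: str) -> None:
    proposal = model_suggest(case_id)
    if human_approves(proposal):
        print("Executing:", proposal)
    else:
        print("Escalated for manual review; nothing runs autonomously.")

decide("case-4821")
```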


Site Reliability Engineering State of the Union for 2024

Automation remains at the core of SRE, with tools for container orchestration and infrastructure management playing a critical role. The adoption of containerization technologies such as Docker and Kubernetes has facilitated more efficient deployment and scaling of applications. In 2024, we can expect further advancements in automation tools that streamline the orchestration of complex microservices architectures, thereby reducing the operational burden on SRE teams. Infrastructure automation and orchestration are pivotal in the realm of SRE, enabling teams to manage complex systems with enhanced efficiency and reliability. The evolution of these technologies, particularly with the advent of containerization and microservices, has significantly transformed how applications are deployed, managed and scaled. ... With the increasing prevalence of cyberthreats and the tightening of regulatory requirements, security and compliance have become integral aspects of SRE. Automated tools for compliance monitoring and enforcement will become indispensable, enabling organizations to adhere to industry standards while minimizing the risk of data breaches and other security incidents.
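As one small, hedged example of this kind of automation, the official Kubernetes Python client can adjust a deployment’s replica count programmatically; the deployment and namespace names below are placeholders, and in a real SRE setup a HorizontalPodAutoscaler would usually handle this rather than a hand-rolled script:

```python
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    config.load_kube_config()  # Use load_incluster_config() inside a pod.
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Example: scale the (hypothetical) checkout service up for peak traffic.
scale_deployment("checkout", "prod", replicas=6)
```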


5 Steps to Refocus Your Digital Transformation Strategy for Strategic Advancement

A strategy built around customer value provides measurable outcomes and drives deeper engagement and loyalty. The digital landscape is full of both risks and opportunities thanks to rapid technological advancements, especially in data-centric AI. Businesses must stay agile, continually evaluating the risks and rewards of new technologies while maintaining a sharp focus on how these enhancements serve their customer base. ... Organizations with a customer advisory board should leverage it to gain insights directly from those who use their services or products. Engaging customers from the early stages of planning ensures that their feedback and needs directly influence the transformation strategy, leading to more accurate and beneficial implementations. ... One significant mistake IT leaders make is prioritizing technology over customer needs. While technology is a crucial enabler, it should not dictate the strategy; rather, it should support and enhance the strategy’s core aim of serving the customer. IT leaders must ensure that digital initiatives align with broader business objectives and directly contribute to customer satisfaction and business efficiency.


OpenSSH Vulnerability “regreSSHion” Grants RCE Access Without User Interaction, Most Dangerous Bug in Two Decades

The good news about the OpenSSH vulnerability is that exploitation attempts have not yet been spotted in the wild. Successfully taking advantage of the exploit required about 10,000 tries to win a race condition using 100 concurrent connections under the researcher’s test conditions, or roughly six to eight hours to achieve RCE, because ASLR obscures the address of glibc. When threat actors do work out the exploit, the attack will thus likely be limited to those wielding botnets. Given the large number of simultaneous connections needed to induce the race condition, the attack is also very open to being detected and blocked by firewalls and network monitoring tools. Qualys’ immediate advice for mitigation also includes updating network-based access controls and segmenting networks where possible. ... “While there is currently no proof of concept demonstrating this vulnerability, and it has only been shown to be exploitable under controlled lab conditions, it is plausible that a public exploit for this vulnerability could emerge in the near future. Hence it’s strongly advised to patch this vulnerability before this becomes the case.”
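Because exploitation requires thousands of connection attempts, even crude rate monitoring can surface it. Below is an illustrative sketch that counts sshd connection lines per source IP in an auth log; the log path, message format, and threshold are assumptions that vary by distribution and sshd LogLevel:

```python
import re
from collections import Counter

# Matches lines like: "... sshd[1234]: Connection from 203.0.113.7 port 54822 ..."
PATTERN = re.compile(r"sshd\[\d+\]: Connection from (\S+) port")

def noisy_sources(log_path: str = "/var/log/auth.log", threshold: int = 500) -> dict:
    counts: Counter = Counter()
    with open(log_path) as fh:
        for line in fh:
            match = PATTERN.search(line)
            if match:
                counts[match.group(1)] += 1
    # IPs hammering sshd hard enough to suggest a race-condition attempt.
    return {ip: n for ip, n in counts.items() if n >= threshold}

print(noisy_sources())
```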


New paper: AI agents that matter

So are AI agents all hype? It’s too early to tell. We think there are research challenges to be solved before we can expect agents such as the ones above to work well enough to be widely adopted. The only way to find out is through more research, so we do think research on AI agents is worthwhile. One major research challenge is reliability — LLMs are already capable enough to do many tasks that people want an assistant to handle, but not reliable enough that they can be successful products. To appreciate why, think of a flight-booking agent that needs to make dozens of calls to LLMs. If each of those went wrong independently with a probability of, say, just 2%, the overall system would be so unreliable as to be completely useless (this partly explains some of the product failures we’ve seen). ... Right now, however, research is itself contributing to hype and overoptimism because evaluation practices are not rigorous enough, much like the early days of machine learning research before the common task method took hold. That brings us to our paper.
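To make the compounding concrete, here is the arithmetic (the call counts are illustrative; the excerpt says only “dozens”): a pipeline of n independent steps that each succeed with probability 0.98 succeeds end to end with probability 0.98^n.

```python
# End-to-end success probability for n independent steps with
# per-step failure probability p.
def pipeline_success(n: int, p: float = 0.02) -> float:
    return (1 - p) ** n

for n in (12, 24, 50):
    print(f"{n} calls: {pipeline_success(n):.0%} end-to-end success")
# 12 calls: 78% / 24 calls: 62% / 50 calls: 36%
```

Even at 24 calls, nearly four in ten sessions hit at least one failure, which is why per-call reliability, not just capability, gates product viability.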



Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown