Daily Tech Digest - June 05, 2024

How foundation agents can revolutionize AI decision-making in the real world

Some of the key characteristics of foundation models can help create foundation agents for the real world. First, LLMs can be pre-trained on large unlabeled datasets from the internet to gain a vast amount of knowledge. Second, the models can use this knowledge to quickly align with human preferences and specific tasks. ... Developing foundation agents presents several challenges compared to language and vision models. The information in the physical world is composed of low-level details instead of high-level abstractions. This makes it more difficult to create unified representations for the variables involved in the decision-making process. There is also a large domain gap between different decision-making scenarios, which makes it difficult to develop a unified policy interface for foundation agents. ... However, it can make the model increasingly complex and uninterpretable. While language and vision models focus on understanding and generating content, foundation agents must be involved in the dynamic process of choosing optimal actions based on complex environmental information.


Soon, LLMs Can Help Humans Communicate with Animals

Generative AI models like OpenAI’s GPT-3 and Google’s LaMDA can provide insights that significantly aid the understanding of non-human communication. ESP has recently developed the Benchmark for Animal Sounds, or BEANS for short, the first-ever benchmark for animal vocalisations. It established a standard against which to measure the performance of machine learning algorithms on bioacoustics data. Using self-supervised learning, it has also created the Animal Vocalisation Encoder, or AVES, the first foundational model for animal vocalisations, which can be applied to many applications, including signal detection and categorisation. The nonprofit is just one of many groups that have recently emerged to translate animal languages. Some organisations, like Project Cetacean Translation Initiative (CETI), are dedicated to attempting to comprehend a specific species — in this case, sperm whales. CETI’s research focuses on deciphering the complex vocalisations of these marine mammals. DeepSqueak is another machine learning technique, developed by University of Washington researchers Kevin Coffey and Russell Marx, capable of decoding rodent chatter.


AI supply is way ahead of AI demand

We’ve been in a weird wait-and-see moment for AI in the enterprise, but I believe we’re nearing the end of that period. Surely the boom-and-bust economics that Cahn highlights will help make AI more cost-effective, but ironically, the bigger driver may be lowered expectations. Once enterprises can get past the wishful thinking that AI will magically transform the way they do everything at some indeterminate future date, and instead find practical ways to put it to work right now, they’ll start to invest. No, they’re not going to write $200 billion checks, but it should pad the spending they’re already doing with their preferred, trusted vendors. The winners will be established vendors that already have solid relationships with customers, not point solution aspirants. Like others, The Information’s Anita Ramaswamy suggests that “companies [may be] holding off on big software commitments given the possibility that AI will make that software less necessary in the next couple of years.” This seems unlikely. More probable, as Jamin Ball posits, we’re in a murky economic period and AI has yet to turn into a tailwind. 


How AI-powered attacks are accelerating the shift to zero trust strategies

After all, a lack of in-house expertise and adequate budget are both largely within an organization’s control through funding for resources, tools, and training. So, if a gap exists at the top, help your senior leadership and board make the critical linkage between zero trust and strong corporate governance. An ever-intensifying threat landscape means senior leadership teams and boards have a duty of care to make the right investments and provide the strategic guidance and oversight to help keep the organization and its stakeholders safe. As further motivation to make this strategic link to zero trust, federal agencies are continuing efforts to hasten breach disclosures and hold executives liable for security and data privacy incidents. Beyond that, it is about sourcing and retaining top tech talent, which speaks to the need to build and maintain an inclusive company culture with continuous training and development opportunities for technical teams. Ensuring security teams are inclusive of neurodiverse talent, for example, is important for encouraging the diverse ways of thinking needed to spot and curtail novel AI-powered attack strategies.


Cryptographers Discover a New Foundation for Quantum Secrecy

Working together, the four researchers quickly proved that Kretschmer’s state discrimination problem could still be intractable even for computers that could call on this NP oracle. That means that practically all of quantum cryptography could remain secure even if every problem underpinning classical cryptography turned out to be easy. Classical cryptography and quantum cryptography increasingly seemed like two entirely separate worlds. The result caught Ma’s attention, and he began to wonder just how far he could push the line of work that Kretschmer had initiated. Could quantum cryptography remain secure even with more outlandish oracles — ones that could instantly solve computational problems far harder than those in NP? “Problems in NP are not the hardest classical problems one can think about,” said Dakshita Khurana, a cryptographer at the University of Illinois, Urbana-Champaign. “There’s hardness beyond that.” Ma began brainstorming how best to approach that question, together with Alex Lombardi, a cryptographer at Princeton University, and John Wright, a quantum computing researcher at the University of California, Berkeley.


Data center challenges in the German market

The potential for the German data center industry to meet growing demands is substantial, but there are hurdles that should not be underestimated. For instance, the tightening of legal regulations and technological barriers poses a risk that expansion efforts may shift focus to other European countries. In particular, the proposed Energy Efficiency Act presents a significant challenge. It mandates that data center operators supply their surplus heat to external customers, particularly local authorities and district heating providers. Ultimately, all operators will have to achieve the same decarbonization targets, and successful waste heat utilization requires seamless collaboration among all parties involved. Although the first district heating projects are already being implemented in urban areas, there is limited interest in connecting older high-calorific district heating networks to data centers with low-calorific surplus heat. ... Additional challenges in developing new projects include the scarcity of suitable land and renewable energy sources, as well as a lack of grid capacity in top-tier regions. As a result, future capacities can only be planned medium to long term. 


How to Attract the Right Talent for Your Engineering Team

As humans, we generally stick with (and hire) what we already know, but our unconscious biases can hinder creativity and hurt the company’s bottom line if left unchecked. The best way to prevent this is by including different types of people — from various departments and experience levels — in the interview process. Anyone we hire must have the ability to operate in a cross-functional manner, therefore it makes sense that the people they’ll be interacting with are included in the process. ... When assessing potential candidates, look for people who understand that they don’t always need to figure everything out themselves versus those who sell themselves as the best at their craft. This signals an aptitude for continued learning and problem-solving, as well as a willingness to collaborate in a team setting. Other good signs to look for include folks who talk about outcomes instead of just outputs, and acknowledge the importance of cross-functional collaboration in achieving outcome-driven success. ... Mentorship is a key ingredient for nurturing top talent, and it has the added benefit of increasing employee retention.


How data champions can boost regulatory compliance

At the heart of successful compliance are people. By utilising the organisational design of a company, working intimately with employees, and making them ‘data champions’, organisations can empower staff to take responsibility for adherence. Too often companies place the responsibility on one person or department to ensure compliance. However, data champions working in specific departments throughout an organisation can have a much better overview of where the risk lies and what needs to be implemented to close vulnerabilities. Making compliance a part of everyday life, or ‘data protection by design and default’ as it’s sometimes known, means that it becomes a much more manageable task rather than a daunting one. Alongside this, implementing a solution that can help manage the policies brought in to deal with data protection risks (and also keep a record of who owns the policies as well as, crucially, who has read and understood them) means that companies suddenly have a more accurate and comprehensive overview of how they sit in terms of adherence to regulation.


Beyond Chatbots: Meet Your New Digital Human Friends

Digital humans are important players in virtual and augmented reality environments, says Matt Aslett, a director with technology research and advisory firm ISG via email. "They are also used in gaming and animation and are being deployed as interactive customer service assistants, providing real-time conversational interfaces to access help and information in multiple industries, including retail and healthcare." ... Tomik believes there are times when people may feel more comfortable talking to a digital human than a real person, such as when seeking advice on discomforting issues, including addiction, anger management, relationship difficulties, and other deeply personal topics. "It allows people to feel comfortable asking for help without judgment," he notes. ... Digital human technology is evolving rapidly, but it's still far from being a complete replacement for human-to-human interaction. "Like AI chatbot interfaces, a digital human interface may struggle to detect nuanced communication traits, such as sarcasm, emotion, and deception, and may be unsuitable for dealing with complex and critical user requests," Aslett says.


Navigating Data Readiness for Generative AI

The technical challenges posed by data readiness for generative AI include: Insufficient data preparation: Anonymizing data is especially important for health and finance applications, but it also reduces an organization’s liability and helps it meet compliance requirements. Labeling the data is a form of annotation that identifies its context, sentiment, and other features for NLP and other uses. ... Finding the right size LLMs: Smaller models help companies reduce their resource consumption while making the models more efficient, more accurate, and easier to deploy. Organizations may start with large models for proof of concept and then gradually reduce their size while testing to ensure the model’s results remain accurate. ... Retrieval-augmented generation: This AI framework supplements the LLM’s internal representation of information with external sources of knowledge, as sketched below. ... Overcoming data silos: Data silos prevent data the model needs from being discovered, introduce incomplete data sets, and result in inaccurate reports while also driving up data-management costs. Preventing data silos entails identifying disconnected data, creating a data governance framework, promoting collaboration across teams, and establishing data ownership.
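
To make the retrieval-augmented generation item concrete, here is a minimal sketch of the idea, assuming a toy in-memory knowledge base and TF-IDF retrieval; the documents, function names, and prompt format are illustrative, not any particular vendor's API:

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant documents for a query, then prepend them to the prompt so the
# LLM answers from external knowledge rather than only its internal weights.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # stand-in for an enterprise knowledge base
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available 24/7 via chat and email.",
    "Premium accounts include priority onboarding assistance.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do I have to return an item?"))
# The assembled prompt would then be sent to whatever LLM the organization uses.
```

In production the TF-IDF step would typically be replaced by vector embeddings and a vector store, but the retrieve-then-prompt flow is the same.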



Quote for the day:

"Sometimes it takes a good fall to really know where you stand." -- Hayley Williams

Daily Tech Digest - June 04, 2024

Should Your Organization Use a Hyperscaler Cloud Provider?

Vendor lock-in is perhaps the biggest hyperscaler pitfall. "Relying too heavily on a single hyperscaler can make it difficult to move workloads and data between clouds in the future," Inamdar warns. Proprietary services and tight integrations with a particular hyperscaler cloud provider's ecosystem can also lead to lock-in challenges. Cost management also requires close scrutiny. "Hyperscalers’ pay-as-you-go models can lead to unexpected or runaway costs if usage isn't carefully monitored and controlled," Inamdar cautions. "The massive scale of hyperscaler cloud providers also means that costs can quickly accumulate for large workloads." Security and compliance are additional concerns. "While hyperscalers invest heavily in security, the shared responsibility model means customers must still properly configure and secure their cloud environments," Inamdar says. "Compliance with regulatory requirements across regions can also be complex when using global hyperscaler cloud providers." On the positive side, hyperscaler availability and durability levels exceed almost every enterprise's requirements and capabilities, Wold says.


Innovate Through Insight

The common core of both strategy and innovation is insight. An insight results from the combination of two or more pieces of information or data in a unique way that leads to a new approach, new solution, or new value. Mark Beeman, professor of psychology at Northwestern University, describes insight in the following way: “Insight is a reorganization of known facts taking pieces of seemingly unrelated or weakly related information and seeing new connections between them to arrive at a solution.” Simply put, an insight is learning that leads to new value. ... Innovation is the continual hunt for new value; strategy is ensuring we configure resources in the best way possible to develop and deliver that value. Strategic innovation can be defined as the insight-based allocation of resources in a competitively distinct way to create new value for select customers. Too often, strategy and innovation are approached separately, even though they share a common foundation in the form of insight. As authors Campbell and Alexander write, “The fundamental building block of good strategy is insight into how to create more value than competitors can.”


Managing Architectural Tech Debt

Architectural technical debt is a design or construction approach that's expedient in the short term, but that creates a technical context in which the same work requires architectural rework and costs more to do later than it would cost to do now (including increased cost over time). ... The shift-left approach embraces the concept of moving a given aspect closer to the beginning of a lifecycle rather than the end. This concept gained popularity with shift-left for testing, where the test phase was moved into the development process rather than being a separate event completed after development was finished. Shift-left can be implemented in two different ways in managing ATD: Shift-left for resiliency: identifying sources that have an impact on resiliency, and then fixing them before they manifest in performance. Shift-left for security: detecting and mitigating security issues during the development lifecycle (see the sketch below). Just like shift-left for testing, a prioritized focus on resilience and security during the development phase will reduce the potential for unexpected incidents.
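
As one illustration of shift-left for security, a check like the following could run in a pre-commit hook or an early CI stage so that hardcoded credentials are caught during development rather than in production. The regex patterns and file selection are simplified assumptions; real secret scanners are far more thorough:

```python
# Shift-left security sketch: scan source files for hardcoded secrets so the
# issue is caught while the code is being written, not after deployment.
import re
import sys
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"(?i)(password|secret|api[_-]?key)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scan(root: str) -> list[str]:
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append(f"{path}:{lineno}: possible hardcoded secret")
    return findings

if __name__ == "__main__":
    problems = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("\n".join(problems) or "No hardcoded secrets found.")
    sys.exit(1 if problems else 0)  # a non-zero exit fails the CI stage
```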


Snowflake adopts open source strategy to grab data catalog mind share

The complexity and diversity of data systems, coupled with the universal desire of organizations to leverage AI, necessitates the use of an interoperable data catalog, which is likely to be open source in nature, according to Chaurasia. “An open-source data catalog addresses interoperability and other needs, such as scalability, especially if it is built on top of a popular table format such as Iceberg. This approach facilitates data management across various platforms and cloud environments,” Chaurasia said. Separately, market research firm IDC’s research vice president Stewart Bond pointed out that Polaris Catalog may have leveraged Apache Iceberg’s native Iceberg Catalogs and added enterprise-grade capabilities to it, such as managing multiple distributed instances of Iceberg repositories, providing data lineage, search capability for data utilities, and data description capabilities, among others. Polaris Catalog, which Snowflake expects to open source in the next 90 days, can either be hosted in Snowflake’s proprietary AI Data Cloud or self-hosted in an enterprise’s own infrastructure using containers such as Docker or Kubernetes.


Is it Time for a Full-Stack Network Infrastructure?

When we talk about full-stack network infrastructure management, we aren’t referring to the seven-layer protocol stack upon which networks are built, but rather to how these various protocol layers and the applications and IT assets that run on top of them are managed. ... The key to choosing between a full-stack single network management solution and a SASE solution that focuses on security and policy enforcement in a multi-cloud environment is whether you are most concerned that your network governance and security policies are uniform and enforced, or whether you're seeking a solution that goes beyond security and governance to address the entire network management continuum—from security and governance to monitoring, configuration, deployment, and mediation. Further complicating the decision of how best to grow the network is the situation of network vendors themselves. Those that offer a full-stack, multi-cloud network management solution are in evolutionary stages themselves. They have a vision of their multi-cloud full-stack network offerings, but a complete set of stack functionality is not yet in place.


The expensive and environmental risk of unused assets

While critical loads are expected to be renewed, refreshed, or replaced over the lifetime of the data center facility, older, non-Energy-Star-certified, or inefficient servers that are still turned on but no longer being used continue to consume both power and cooling resources. Stranded assets also include excessive redundancy or low utilization of the redundancy options, a lack of scalable, modular design, and the use of oversized equipment or legacy lighting and controls. While many may plan for the update and evolution of the ITE, the mismatch between power and cooling resources and the equipment requiring that power and cooling inevitably results in stranded assets. ... Stranded capacity is wasted energy: cooling unnecessary equipment and losing cooling to areas that need not be cooled. Stranded cooling capacity can include bypass air (supply air from cooling units that is not contributing to cooling the ITE), too much supply air being delivered from the cooling units, lack of containment, poor rack hygiene (missing blanking panels), and unsealed openings under ITE with raised floors, to name just a few.


Architectural Trade-Offs: The Art of Minimizing Unhappiness

The critical skill in making trade-offs is being able to consider two or more potentially opposing alternatives at the same time. This requires being able to clearly convey alternatives so a team can decide which alternative, or neither, acceptably meets the QARs under consideration. What makes trade-off decisions particularly difficult is that the choice is not clear; the facts supporting the pro and con arguments are typically only partial and often inconclusive. If the choice was clear there would be no need to make a trade-off decision. ... Teams who are inexperienced in specific technologies will struggle to make decisions about how to best use those technologies. For example, a team may decide to use a poorer-fit technology such as a relational database to store a set of maps because they don’t understand the better-fit technology, such as a graph database, well enough to use it. Or they may be unwilling to take the hit in productivity for a few releases to get better at using a graph database.
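
To make the fit argument tangible, here is a toy illustration with made-up road data: map-style data is naturally a graph, so a route query is a short traversal, whereas expressing the same query over relational (node, neighbor) rows would require awkward recursive self-joins:

```python
# Toy illustration of why a graph structure fits map data: a shortest-route
# query is a simple breadth-first traversal over an adjacency list.
from collections import deque

roads = {  # adjacency list: intersection -> reachable intersections
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

def shortest_path(start: str, goal: str) -> list[str] | None:
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("A", "E"))  # ['A', 'B', 'D', 'E']
```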


New Machine Learning Algorithm Promises Advances in Computing

Compact enough to fit on an inexpensive computer chip capable of balancing on your fingertip and able to run without an internet connection, the team’s digital twin was built to optimize a controller’s efficiency and performance, which researchers found resulted in a reduction of power consumption. It achieves this quite easily, mainly because it was trained using a type of machine learning approach called reservoir computing. “The great thing about the machine learning architecture we used is that it’s very good at learning the behavior of systems that evolve in time,” Kent said. “It’s inspired by how connections spark in the human brain.” Although similarly sized computer chips have been used in devices like smart fridges, according to the study, this novel computing ability makes the new model especially well-equipped to handle dynamic systems such as self-driving vehicles as well as heart monitors, which must be able to quickly adapt to a patient’s heartbeat. “Big machine learning models have to consume lots of power to crunch data and come out with the right parameters, whereas our model and training is so extremely simple that you could have systems learning on the fly,” he said.
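
As a general illustration of the reservoir computing approach (not the researchers' actual model), here is a minimal echo state network: the recurrent "reservoir" weights are random and fixed, and training fits only a linear readout, which is why such models stay cheap enough for tiny chips. Sizes, scaling, and the toy signal below are arbitrary assumptions:

```python
# Minimal echo state network (reservoir computing) sketch.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

def run_reservoir(u: np.ndarray) -> np.ndarray:
    """Drive the fixed reservoir with input sequence u; return state history."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train only a linear readout (ridge regression) to predict the next sample
# of a toy time series -- this is the entire "learning" step.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) * np.cos(0.3 * t)
X = run_reservoir(signal[:-1])
y = signal[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```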


Getting infrastructure right for generative AI

“It was quite cost-effective at first to buy our own hardware, which was a four-GPU cluster,” says Doniyor Ulmasov, head of engineering at Papercup. He estimates initial savings between 60% and 70% compared with cloud-based services. “But when we added another six machines, the power and cooling requirements were such that the building could not accommodate them. We had to pay for machines we could not use because we couldn’t cool them,” he recounts. And electricity and air conditioning weren’t the only obstacles. ... Another factor working against unmitigated power consumption is sustainability. Many organizations have adopted sustainability goals, which power-hungry AI algorithms make difficult to achieve. Rutten says using SLMs, ARM-based CPUs, and cloud providers that maintain zero-emissions policies, or that run on electricity produced by renewable sources, are all worth exploring where sustainability is a priority. For implementations that require large-scale workloads, using microprocessors built with field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) is a choice worth considering.


Security Teams Want to Replace MDR With AI: A Good or Bad Idea?

“The first stand-out takeaway is the dissatisfaction with MDR systems across the board. A mix between high false positive rates and system inefficiencies is driving a shift to AI solutions, the driving factor being accuracy.” McStay said that the report’s finding that AI has the potential to automate and decrease workloads by as much as 95% is “potentially inflated”. “I don’t think it will be that high in practice, but I would still expect a massive reduction in workload (circa 50-80%). Perhaps opening up a new conversation around where time should be spent best?” McStay added that she does believe replacing MDR with AI is “smart, and certainly what the future will look like”, based on accuracy and response time. ... “The catch is that nobody is ‘replacing’ anything, rather AI is being integrated solely for the purpose of expediting detection and response, which improves the signal-to-noise ratio for human operators drastically and makes for a far more effective SOC,” Hasse explained. When questioned whether it was a good idea to replace MDR with AI, Hasse said security teams should not be replacing MDR services but rather augmenting them.



Quote for the day:

"Decision-making is a skill. Wisdom is a leadership trait." -- Mark Miller

Daily Tech Digest - June 03, 2024

What’s eating B2B SaaS

Recently, however, there has been increasing speculation that large language models (LLMs) are a threat to the entire software ecosystem. In an aptly named short essay titled “The End of Software”, venture capitalist Chris Paik of Pace Capital contends that LLMs can significantly lower the cost of software development and maintenance, leading to a proliferation of new, agile software solutions that could replace traditional SaaS models. Paik argues this shift may result in a fundamental rethinking of how software is built, sold, and consumed, potentially rendering existing B2B SaaS business models obsolete as the market transitions to AI agents. He goes so far as to say “Majoring in computer science today will be like majoring in journalism in the late 90’s”. ... Most SaaS is priced by the seat. Given the direct correlation between workforce reduction and revenues, this easily equates to billions of dollars in lost recurring revenues across the industry. Indeed, one of the main benefits touted by SaaS companies was this ability to scale up and down as needed without commitment.


Deploying scalable modular data centers at the Edge

Requirements around the build-out of data centers have also led to a rethink of how these buildings need to be constructed. Building data centers at the Edge is the way to combat some of the challenges that the industry faces, Lindsey argues. “We see Edge as a way to activate infrastructure very quickly, where today, as you know, we have a wildly low vacancy rate,” he says. “That should continue through the next several years, and people still need data, so we see this as a way to scale out where we have now. We’re now on track over the next five years to be able to scale out at gigawatt scale.” ... “We wanted to create an integrated platform that allows for our customers to build and deploy data centers, with an experience that's much more akin to building and buying a car,” he explains, noting that Flexnode focuses on connecting three key parts. “The first part was an industrialized building system that is designed for disassembly, configurability, adaptability, and designed to go anywhere.” The second part, he says, is focused on its fully digitized process which helps its customers design and configure their data centers. The final part, he adds, is the ecosystem of partners Flexnode works with, spanning engineering and construction.


Saudi entrepreneurs launch fintech startup to spur open banking growth in GCC

The projected growth of open banking in the Gulf Cooperation Council countries has motivated Rayan Azab and Salah Khashoggi to partner with Dubai-based fintech entrepreneur Ash Kalra to spearhead this venture after four years of market research. This comes as open banking is projected to account for over $124 billion worth of transactions in the GCC region alone by 2031, up from $14 billion in 2020, with an annual growth rate of 22 percent, according to a report by Allied Market Research. ... “Saudi Arabia has recently advanced its open banking initiatives and is poised to become a regional leader in open banking," he explained. Highlighting the potential impact of open banking growth in the GCC on their trajectory, Azab mentioned that the segment is already established in the region, and they are not introducing something entirely new. “We are just revamping it. Thimsa is going to come and help small businesses that cannot afford to just go and do the huge accounting or whatever,” he said, adding that they will be adding value to these businesses.


A Journey From the Foundations of Observability to Surviving Its Challenges at Scale

The amount of data generated in cloud-native environments, especially at scale, makes it impossible to continue collecting all data. This flood of data, the challenges that arise, and the inability to sift through the information to find the root causes of issues become detrimental to the success of development teams. It would be more helpful if developers were supported with just the right amount of data, in just the right forms, and at the right time to solve issues. No one minds observability if solutions to problems are found quickly, situations are remediated faster, and developers are satisfied with the results. If this is done with one log line, two spans from a trace, and three metric labels, then that's all we want to see. To do this, developers need to know when issues arise with their applications or services, preferably before they happen. They start troubleshooting with data that their instrumented applications have determined will succinctly point to areas within the offending application. Tooling then allows the investigating developer to see dashboards reporting visual information that directs them to the problem and the potential moment it started.
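
As a minimal sketch of that "one log line, two spans, three metric labels" ideal, the snippet below emits one structured log line plus a timed, labeled span using only the standard library. It is a stand-in for whatever observability stack a team actually uses (OpenTelemetry or otherwise), and all names and labels are illustrative:

```python
# Sketch of "just the right amount" of telemetry: one structured log line
# and one timed span carrying a few metric labels.
import json
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")
metrics: dict[tuple, float] = {}

@contextmanager
def span(name: str, **labels):
    """Record how long a block takes, keyed by span name plus metric labels."""
    start = time.perf_counter()
    try:
        yield
    finally:
        key = (name, *sorted(labels.items()))
        metrics[key] = time.perf_counter() - start

with span("charge_card", service="payments", region="eu", tier="gold"):
    time.sleep(0.05)  # stand-in for the real work

log.info(json.dumps({"event": "order_placed", "order_id": 1234, "status": "ok"}))
print(metrics)
```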


Faultless with serverless: Cloud best practices for optimized returns

The Single Responsibility Principle (SRP) is an essential rule to ensure the modularity and scalability of serverless computing. According to the rule, functions should be small, stateless, and have only one primary reason to modify. Stateless functions can easily scale up or down based on demand without any overheads of managing the state. ... An asynchronous, event-driven architecture is best suited for a serverless execution model. Serverless applications achieve resilience, scalability, and efficiency by decoupling components and handling the workloads asynchronously. The technique involves queues and event streams, where the tasks are offloaded and exclusively processed by serverless functions. ... With built-in monitoring solutions, organizations can track function invocations, durations, errors, and resource utilization. This helps them identify and resolve issues proactively and optimise opportunities. To understand this better, consider a serverless IoT platform. Through a strategic process for monitoring and observability, enterprises can remediate issues pertaining to data ingestion, processing, and delivery. 
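
A minimal sketch of these two principles together, assuming an AWS-Lambda-style handler consuming SQS-shaped queue events (the handler signature and event structure follow those services' conventions; the function name and payload fields are illustrative):

```python
# Sketch of a small, stateless, single-responsibility serverless function:
# it does exactly one thing (record a sensor reading) and keeps no state
# between invocations, so the platform can scale copies up and down freely.
import json

def store_reading(reading: dict) -> None:
    # Stand-in for a write to a managed datastore; state lives in the
    # datastore, never in the function itself.
    print(f"storing {reading['device_id']}: {reading['value']}")

def handler(event, context):
    """Asynchronously process a batch of IoT readings pulled off a queue."""
    records = event.get("Records", [])
    for record in records:
        store_reading(json.loads(record["body"]))
    return {"processed": len(records)}

# Local smoke test with an SQS-shaped event:
if __name__ == "__main__":
    fake = {"Records": [{"body": json.dumps({"device_id": "d1", "value": 21.5})}]}
    print(handler(fake, None))
```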


Emerging Trends in Application Security Testing Services

The integration of security into DevOps practices, known as DevSecOps, continues to gain traction. DevSecOps emphasizes collaboration and communication between development, IT operations, and security teams. By automating security checks throughout the development pipeline, DevSecOps ensures that security is not a bottleneck but an integral part of the development process. This proactive approach significantly enhances the overall security posture of applications. ... Machine learning (ML) and artificial intelligence (AI) are revolutionizing application security testing. Advanced ML algorithms can analyze vast datasets to identify patterns and anomalies, helping security experts detect and respond to threats more effectively. AI-driven tools can automate identifying vulnerabilities, predict potential attack vectors, and suggest remediation strategies. ... With the proliferation of APIs (Application Programming Interfaces) in modern applications, API security testing has become a critical focus area. APIs facilitate seamless communication between different software systems but can also be vulnerable points if not properly secured. 
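
As a small illustration of automated API security testing, pytest-style checks like the following could run in a DevSecOps pipeline; the base URL, endpoints, and tokens are hypothetical placeholders, not a real service:

```python
# Sketch of automated API security tests: assert that protected endpoints
# reject unauthenticated and under-privileged callers.
import requests

BASE = "https://api.example.com"  # hypothetical service under test

def test_rejects_missing_token():
    r = requests.get(f"{BASE}/v1/accounts/42")
    assert r.status_code == 401, "endpoint must require authentication"

def test_rejects_foreign_account_access():
    # A token for user 7 must not read user 42's account
    # (an object-level authorization check).
    headers = {"Authorization": "Bearer token-for-user-7"}
    r = requests.get(f"{BASE}/v1/accounts/42", headers=headers)
    assert r.status_code in (403, 404), "object-level authorization must hold"
```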


Kenya & US Aim to Bolster Digital Security in Africa

The news comes as Kenya has seen a spike in attacks, including significantly disruptive incidents. For instance, the country suffered a massive denial-of-service attack that disrupted access to its e-Citizen government-services site last year, and eventually affected electric utilities and rail-ticketing systems. ... "The government ought to adopt good multi-stakeholder practices such as courting local private sector players—especially small and medium-sized enterprises operating in and affected by developments in cyberspace—alongside the local sector leaders and tech multinationals operating in the country," the group stated. "Kenya also has a vibrant information security community that should be incorporated in cyber drills through professional associations." ... Both Kenya and the United States highlighted the efforts of private industry in partnering with the East African nation to improve its cybersecurity posture. In addition to its cyber operations work, Google will be aiding Kenya with incident-response solutions and improving infrastructure resilience.


How to Find the Right AI Solution: 3 Innovative Techniques

The real issue is that AI, as a product or service, doesn’t fit well into the RFP process. First, AI isn’t akin to a magic wand. It works slowly and deliberately, producing incremental -- but very real and significant -- improvements over time. These gains are hard to explain using an RFP, which, again, demands results that are achievable rapidly and according to a strict timeline. Instilling confidence -- without conveying false hopes or unrealistic expectations -- is difficult given the sheer volume of specific and detailed questions that RFPs require. ... For AI to do what it does best, it requires access to every bit of available data over an extended period of time. Limiting AI to a very brief timeline, which includes piecemeal and/or partial access to data, yields results that are effectively useless. A POC, in short, gives no indication of what the technology could do if these restrictions didn’t exist, which makes it hard for vendors of all sizes to use proofs of concept to bolster their submissions -- and even harder for organizations to trust what the POC is claiming.


Advanced CI/CD: 6 steps to better CI/CD pipelines

One surprising data point in the State of CI/CD report was the number of CI/CD platforms respondents had in place and how it impacted DORA metrics. Companies using a hybrid approach of self-hosted and managed CI/CD platforms outperformed those who standardized on one approach or were not using CI/CD platforms. Of the companies using a hybrid approach, 49% had a lead time of less than one week for changes, and 24% had a lead time of less than one day. Sixty-six percent could typically restore service performance from an unplanned outage in under a day, and 25% could do so in under an hour. These rates were significantly better than those using only one approach. The report also showed that organizations using three or fewer CI/CD platforms generally outperformed those with more than three tools. There are many reasons why organizations may have multiple CI/CD platforms. For example, a company may use Copado or Opsera to deploy apps to Salesforce, use Jenkins for data center apps, GitHub Actions for cloud-native applications, and then inherit implementations using AWS CodeBuild and AWS CodePipeline after acquiring a business. 


Key Considerations for C-Suite Leaders Involved in Digital Transformation Initiatives

Poor data can lead to poor decisions, especially with AI-based technologies where foundational models rely solely on data without context. While making good decisions relies on complete and accurate data, using incorrect data can lead to significant financial losses. ... Before embarking on a transformation, leadership needs to understand the regulatory environment specific to their industry. This requires looking at current regulations and understanding potential changes happening in the relatively short term. ... Before and during a transformation, C-suite leaders must keep their finger on the pulse of cybersecurity. Newer technologies are aggregating large datasets of customer, banking, and personally identifiable information (PII), which demands a premium and can be extremely valuable on the dark web. Implementing an innovative technology is a perfect time to ensure adequate cybersecurity measures, and post-implementation testing of new integrations will provide additional peace of mind. Protecting digital assets is not only a technical challenge; it’s a human challenge. 



Quote for the day:

"Earn your leadership every day." -- Michael Jordan

Daily Tech Digest - June 02, 2024

Can the sovereign cloud become Oracle’s crowning glory?

Organisations in highly regulated industries, like the banking sector, are also very interested in using sovereign clouds. They’ve already invested a huge amount into their data centres, and they like the idea of perhaps running Oracle Cloud Services alongside that. And they’ve got legacy systems to consider, too. Look at Deutsche Bank. They continue to run a lot of their applications in a standard way, but they’ve modernised their Oracle database estate by using our Oracle Exadata Cloud@Customer offering. ... AI will be another complicating factor. There’s a real desire among customers to make use of AI technologies, but there’s a real nervousness about making sure that any model is properly trained on the data contained within the company and not unduly exposed to training material scraped from across the internet. That’s why we recently announced a partnership with Nvidia. We’re not only harnessing its GPUs within our network but doing so in a way that ensures that they’re operated in a sovereign context. That really is an area that we’re ploughing ahead with because we just think there’s a lot of demand for such an approach.


An AI tool for predicting protein shapes could be transformative for medicine

Proteins are essential parts of living organisms and take part in virtually every process in cells. But their shapes are often complex, and they are difficult to visualise. So being able to predict their 3D structures offers windows into the processes inside living things, including humans. This provides new opportunities for creating drugs to treat disease. This in turn opens up new possibilities in what is called molecular medicine. This is where scientists strive to identify the causes of disease at the molecular scale and also develop treatments to correct them at the molecular level. The first version of DeepMind’s AI tool was unveiled in 2018. The latest iteration, released this year, is AlphaFold3. A worldwide competition to evaluate new ways of predicting the structures of proteins, the Critical Assessment of Structure Prediction (Casp), has been held every two years since 1994. In 2020, the Casp competition got to test AlphaFold2 and was very impressed. Since then, researchers eagerly anticipate each new incarnation of the algorithm.


AI training data has a price tag that only Big Tech can afford

“Overall, entities governing content that’s potentially useful for AI development are incentivized to lock up their materials,” Lo said. “And as access to data closes up, we’re basically blessing a few early movers on data acquisition and pulling up the ladder so nobody else can get access to data to catch up.” Indeed, where the race to scoop up more training data hasn’t led to unethical (and perhaps even illegal) behavior like secretly aggregating copyrighted content, it has rewarded tech giants with deep pockets to spend on data licensing. Generative AI models such as OpenAI’s are trained mostly on images, text, audio, videos and other data — some copyrighted — sourced from public web pages. ... OpenAI has spent hundreds of millions of dollars licensing content from news publishers, stock media libraries and more to train its AI models — a budget far beyond that of most academic research groups, nonprofits and startups. Meta has gone so far as to weigh acquiring the publisher Simon & Schuster for the rights to e-book excerpts.


Snowflake compromised? Attackers exploit stolen credentials

“Information about the incident and the group’s tactics is not yet fully published, but from what we know, the group utilizes custom tools to find Snowflake instances and employs credential stuffing techniques to gain unauthorized access. Once access is obtained, they leverage built-in Snowflake features to exfiltrate data to external locations, possibly using cloud storage services.” Brad Jones, VP of Information Security and CISO at Snowflake, says that they became aware of potentially unauthorized access to certain customer accounts on May 23, 2024. “During our investigation, we observed increased threat activity beginning mid-April 2024 from a subset of IP addresses and suspicious clients we believe are related to unauthorized access,” he added. “Research indicates that these types of attacks are performed with our customers’ user credentials that were exposed through unrelated cyber threat activity. To date, we do not believe this activity is caused by any vulnerability, misconfiguration, or malicious activity within the Snowflake product.”


GoFr: A Go Framework To Power Scalable and Observable Apps

When an application encounters such an error (typically due to temporary network glitches or database timeouts), instead of immediately giving up and returning an error to the user, the retry pattern involves automatically retrying the operation after a short delay. This delay can be fixed or exponential, meaning that subsequent retries occur after increasing intervals. But sometimes relentless retries exacerbate the problem, leading to potential service degradation and even unintentional denial-of-service attacks. To address this challenge, GoFr integrates the circuit breaker pattern, a robust defense mechanism designed to prevent futile operations and mitigate the impact of non-transient faults. The circuit breaker pattern complements the retry pattern by focusing on recognizing and handling scenarios where repeated attempts at an operation are unlikely to succeed. Rather than persistently retrying, the circuit breaker pattern aims to safeguard the system by temporarily halting further attempts upon detecting a certain threshold of failures. 
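
Since GoFr itself is a Go framework, the following Python sketch shows the two patterns generically rather than GoFr's actual API: retries with exponential backoff, wrapped in a breaker that opens after repeated failures and fails fast until a cool-down elapses. The thresholds and delays are illustrative:

```python
# Generic retry-with-backoff plus circuit breaker sketch.
import time

class CircuitBreaker:
    """Fail fast once a dependency has failed `threshold` calls in a row."""
    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, retries: int = 3, base_delay: float = 0.5):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing fast")  # skip futile work
            self.opened_at = None  # cool-down over; allow a trial call
        last_exc = None
        for attempt in range(retries):
            try:
                result = fn(*args)
                self.failures = 0  # success closes the circuit
                return result
            except Exception as exc:
                last_exc = exc
                if attempt < retries - 1:
                    time.sleep(base_delay * 2 ** attempt)  # exponential backoff
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()  # open: stop hammering the dependency
        raise RuntimeError("operation failed after retries") from last_exc

# breaker = CircuitBreaker()
# breaker.call(fetch_exchange_rates)  # fetch_exchange_rates is hypothetical
```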


Digital transformation: AI — Executive Insights

The public’s understanding of AI has gained ground since ChatGPT’s 2022 launch, which took a staggeringly short five days to reach 1 million users. But Bhasin says the term “artificial intelligence” doesn’t necessarily capture the technology’s true value. “How we think about AI is less about being artificial intelligence and more about being augmented intelligence,” Bhasin says. Humans generally listen well and empathize. Computers are good at doing repetitive things again and again. AI’s value is in augmenting the work that a caring human can provide. “It’s mixing the power of the human being and the augmented intelligence, and how it comes together to serve a client’s needs and serve a business need,” Bhasin says. “That’s a great way to think about how to deploy these technologies.” Bhasin pointed out that Bank of America’s Erica AI tool was released more than five years ago. The tool was built in-house. “We’ve been doing this for a long time,” Bhasin says. “We know how to do this, and we know how to do it at scale.”


The CFO Renaissance: what the rebirth of the role means for businesses

Modern CFOs are also expected to be proficient with emerging technologies such as AI, machine learning and blockchain, which all help to automate routine financial tasks, enhance accuracy and enable more sophisticated financial modelling. It’s a full plate, but Pleo’s ambition is to ensure that CFOs have the means to execute their new responsibilities effectively and are finally able to step out of the back office to occupy a key role in strategic decision making. Today, Pleo is Europe’s leading spend-management solution, enabling 33,000 companies across Europe to run their finances efficiently and in doing so, promote business success without compromising on control, transparency or financial safety. With its forward-thinking solutions, Moylan says Pleo can play an important role in “enabling CFOs to add value in other areas. ...” Integrating solutions like Pleo across an organisation can have compounding benefits, believes Moylan, including helping to connect critical areas and “ensure the accounting system talks to the payroll system, the expense management system and the tax authority – all of which is critical to effective decision making”.


Security-as-Code: A Key Building Block for DevSecOps

Security-as-Code is a foundational building block of DevSecOps. SaC provides the automation, consistency and reliability of ensuring security in the DevSecOps ecosystem. It treats every security measure as code artifacts that are version-controlled, tested and deployed alongside the actual software. ... SaC allows security controls and checks to be integrated into the development pipeline, enabling early detection of security vulnerabilities and issues. By identifying and addressing security issues during the development process, organizations can reduce the likelihood of security breaches and minimize the associated risks. ... SaC promotes consistency and standardization in security configurations and practices across development, testing and production environments. By defining security measures as code artifacts, organizations can ensure that security policies are uniformly applied and enforced throughout the software development lifecycle. ... SaC automates security processes, such as vulnerability scanning, compliance checks, and configuration management, leading to increased agility and efficiency.
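
A minimal sketch of the idea: a security policy expressed as ordinary, version-controlled, testable code that a pipeline can run against proposed resource definitions. The configuration shape and rules below are illustrative assumptions, not any particular cloud's schema:

```python
# Security-as-Code sketch: policy checks live in the repo, are reviewed and
# versioned like any other code, and gate the pipeline on violations.
def check_policy(resource: dict) -> list[str]:
    violations = []
    if not resource.get("encryption_at_rest"):
        violations.append("encryption at rest must be enabled")
    for rule in resource.get("ingress", []):
        if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") != 443:
            violations.append(f"port {rule.get('port')} open to the internet")
    return violations

# Run against a proposed resource definition, e.g. parsed from IaC files:
proposed = {
    "encryption_at_rest": False,
    "ingress": [{"cidr": "0.0.0.0/0", "port": 22}],
}
for v in check_policy(proposed):
    print("POLICY VIOLATION:", v)  # a CI gate would fail the build here
```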


Robotics Reshaping Manufacturing and the Future of Work

The growing adoption of industrial robots is driven by a range of factors. Advances in sensors, computing power and AI are making robots more capable, flexible, and user-friendly. Labour shortages and rising wage costs in many countries are also spurring companies to automate more tasks, while the COVID-19 pandemic highlighted the resilience and efficiency benefits of robotic systems, accelerating automation plans in numerous industries. Recent advances in deep learning algorithms have also allowed robots to perform more complex tasks, with increasing numbers of industry leaders now predicting that the robotics industry is set to dramatically accelerate. “We have many partners developing applications using AI to allow our robots to perform more complex and diverse functions,” comments Anders Billesø Beck, Vice President of Strategy and Innovation at Universal Robots. “For example, AI allows robots to have human-like perception, handle variation, move parts precisely, adapt to changing environments, and learn from their own experience.”


How Software Architecture Choices Impact Application Scalability, Resiliency and Engineering Velocity

As organizations grapple with how to tackle ATD and balance the trade-offs between architectures, the pivotal role of software architects becomes evident. However, the survey reveals a disconnect between architects, who are responsible for the long-term integrity of system architecture, and the modern DevOps processes that drive iterative software delivery. While C-suite leaders rank the enterprise architect as primarily responsible for addressing ATD within their organizations, engineering teams placed architects much lower on that list, below directors and engineering leadership. This fundamental lack of clarity around roles and responsibilities highlights the complexity of the issue within enterprises. ... To confront the mounting ATD crisis, organizations are turning to architectural observability. After being presented with a definition of architectural observability as "the ability to analyze applications statically and dynamically to understand their architecture, detect drift, and find/fix architectural debt", an overwhelming 80% of respondents acknowledged that having these capabilities would be extremely or very valuable within their organizations.



Quote for the day:

"The art of leadership is saying no, not yes. It is very easy to say yes." -- Tony Blair

Daily Tech Digest - June 01, 2024

AI Governance: Is There Too Much Focus on Data Leakage?

While data leakage is an issue, it’s by no means the only one. GenAI stands apart due to its autonomous nature and its unique ability to create new content from the information it is exposed to, and this introduces a whole host of new problems. Data poisoning, for instance, sees a malicious actor intentionally compromise the data feed of the AI to skew results. This might involve seeding an LLM with examples of deliberately vulnerable code, resulting in issues being adopted in new code. Without proper checks and balances in place, this could result in the poisoned data being pulled into organisational codebases via requests from developers. The code could then end up in production applications and services which would be vulnerable to a zero-day attack. AI hallucinations, sometimes referred to as confabulations, are another issue. Unlike poisoning, this is the result of the AI’s autonomy, which can see it make incorrect deductions based on the data it’s presented with. GenAI can and does make mistakes, and there are numerous notable examples here too.


12 Key AI Patterns for Improving Data Quality (DQ)

While there are many solutions and options for improving data quality, AI is a particularly viable one. AI can significantly enhance data quality in several ways. Here are 12 key use cases or patterns, from four categories, where AI can help improve data quality in business enterprises. ... Firstly, as LLMs such as ChatGPT and Gemini are trained on enormous amounts of public data, it is nearly impossible to validate the accuracy of this massive data set. This often results in hallucinations or factually incorrect responses. No business enterprise would like to be associated with a solution that has even a small probability of giving an incorrect response. Secondly, data today is a valuable business asset for every enterprise. Stringent regulations such as GDPR, HIPAA, and CCPA are forcing companies to protect personal data. Breaches can lead to severe financial penalties and damage to the company’s reputation and brand. Overall, organizations want to protect their data by keeping it private and not sharing it with everyone on the internet. Below are some examples of hallucinations from popular AI platforms.
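
As a concrete taste of one common data-quality pattern, the sketch below applies rule-based validation (duplicates, malformed emails, out-of-range values) to a toy table; the column names, rules, and thresholds are illustrative assumptions:

```python
# Rule-based data-quality checks on a toy customer table: flag duplicate
# rows, malformed emails, and implausible ages for quarantine or review.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "age": [34, 29, 29, 420],          # 420 is a likely entry error
    "email": ["a@x.com", "b@x.com", "b@x.com", "not-an-email"],
})

issues = pd.DataFrame({
    "duplicate": df.duplicated(keep=False),
    "bad_email": ~df["email"].str.contains(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", regex=True),
    "age_invalid": ~df["age"].between(0, 120),
})
print(df[issues.any(axis=1)])  # records to quarantine or route for review
```

In practice these deterministic rules are often paired with learned models (for example, anomaly detectors) so that errors without an obvious rule still get caught.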


Experts Warn of Security Risks in Grid Modernization

Experts recommend requiring comprehensive security assessments on all GETs and modern grid components. They say malicious actors and foreign adversaries already possess unauthorized access to many critical infrastructure sectors. The Cybersecurity and Infrastructure Security Agency has steadily released a series of alerts in recent months warning of a Chinese state-sponsored hacking group known as Volt Typhoon. The group is aiming to pre-position itself using "living off the land" techniques on information technology networks "for disruptive or destructive cyber activity against U.S. critical infrastructure in the event of a major crisis or conflict with the United States," according to CISA. "The Volt Typhoon alerts have said the quiet part out loud," said Padraic O'Reilly, chief innovation officer for the risk management platform CyberSaint Security. "The [threat] is in the networks, so new infrastructure must not allow for lateral movement on OT assets." Biden's federal-state grid modernization plan emphasizes the need to "speed up adoption and deployment" of GETs. 


Corporations looking at gen AI as a productivity tool are making a mistake

Taking the time to focus on the bigger picture will set up organizations for more success in the future, Menon said. AI is transformational and requires a comprehensive reevaluation of current business processes, data strategies, technology platforms, and people strategies, Pallath said. “Implementing AI effectively necessitates simplifying and revamping business processes with an AI-first mindset,” Pallath said. “Effective change management and governance are crucial to ensure that the entire organization is prepared for and engaged in this transformation.” What often happens, he said, is that employees worry more about AI’s impact on their jobs, rather than how they can leverage the technology to help them work smarter, thereby hindering the necessary changes in process to make AI successful. Executive leadership and sponsorship are also critical. “AI initiatives need strong leadership support to overcome inertia and gain the necessary resources,” Pallath said. “Without a clear vision from the top, AI projects are more likely to get stalled or diluted.” A dedicated AI team headed by a chief AI officer can help ensure success. 


Why HTML Actions Are Suddenly a JavaScript Trend

Actions in React look a lot like HTML actions, but they also look similar to event handlers like onSubmit or onClick, Clark said. “Despite the surface-level similarities, though, actions have some important abilities that set them apart from regular event handlers,” he continued. “One such ability is support for progressive enhancement. Form actions in React are interactive before hydration occurs. Believe it or not, this works with all actions, not just actions defined on the server.” If the user interacts with a client action before it has finished hydrating, React will queue the action and replay it as soon as it streams in, he said. If the user interacts with a server action, the action can immediately trigger a regular browser navigation, without hydration or JavaScript. Actions also can handle asynchronous logic, he said. “React actions have built-in support for UX patterns like optimistic UI and error handling,” he said. “Actions make these complex UX patterns super simple by deeply integrating with React features like suspense and transitions.”


Indonesia to Create 'Super Apps' to Run Government Services

The government has entrusted state-owned technology company Perum Peruri, commonly known as Peruri, with developing the new applications, digitizing government services and implementing the government's Electronic-Based Government System, which will run modernized applications and digital portals. ... The company said its rich history of developing high-security solutions makes it the ideal choice to lead the government's digital transformation program. "Peruri presents a fresh visual identity that illustrates how we are able to produce quality services to maintain the authenticity of products, identities and complex digital systems," said President and Director Dwina Septiani Wijaya. "The transformation process we are undergoing does not only focus on business and infrastructure, but we also understand the importance of quality human resources. ... The government's planned integration of government applications could mean IT security teams have far fewer applications to manage than before, but it could also make the new super applications prime targets for hacking attacks, considering the amount of public data they would process.


Within two years, 90% of organizations will suffer a critical tech skills shortage

Among the challenges organizations face when trying to expand the skills of their employees is resistance to training. Employees complain that the courses are too long, the options for learning are too limited, and there isn’t enough alignment between skills and career goals, according to IDC’s survey. ... IT leaders need to employ a variety of strategies to encourage a more effective learning environment within their organization. That includes everything from classroom training to hackathons, hand-on labs, and games, quests, and mini-badges. But fostering a positive learning environment in an organization requires more than just materials, courses, and challenges. Culture change begins at the top, and leaders need to demonstrate why learning matters to the organization. “This can be done by aligning employee goals with business goals, promoting continuous learning throughout the employee’s journey, and creating a rewards program that recognizes process as well as performance,” IDC’s report stated. “It also requires the allocation of adequate time, money, and people resources.”


RIG Model - The Puzzle of Designing Guaranteed Data-Consistent Microservice Systems

The RIG model sets the foundation for the saga design. It is founded in the CAP theorem and the work of Bromose and Laursen. The theoretical work results in a set of microservice categories and rules that the sagas must comply with if we are to guarantee data consistency. The RIG model divides microservices behavior within a saga into three categories: Guaranteed microservices: Local transactions will always be successful. No business constraints will invalidate the transaction. Reversible microservices: Local transactions can always be undone and successfully rolled back with the help of compensating transactions. Irreversible microservices: Local transactions cannot be undone. ... A reversible microservice must include support for a compensating transaction and be able to handle an incoming "cancel transaction" message. When receiving a "cancel transaction" request, the microservice must "roll back" to the state before the saga. Compensating-transaction handling in a reversible microservice must itself behave as a "Guaranteed" service.
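
A minimal sketch of a saga honoring these categories, with illustrative service names: reversible steps register compensating actions, guaranteed steps may need none, and the irreversible step is sequenced last so a failure never strands an action that cannot be undone:

```python
# Minimal saga orchestrator sketch reflecting the RIG categories.
def run_saga(steps):
    """steps: list of (do, undo) pairs; undo=None marks a step with no
    compensation (guaranteed steps need none; irreversible steps go last)."""
    done = []
    for do, undo in steps:
        try:
            do()
            done.append(undo)
        except Exception as exc:
            print(f"step failed ({exc}); compensating...")
            for compensate in reversed(done):
                if compensate:
                    compensate()  # roll back reversible steps in reverse order
            raise

run_saga([
    (lambda: print("reserve inventory"), lambda: print("release inventory")),
    (lambda: print("charge card"),       lambda: print("refund card")),
    (lambda: print("ship order"),        None),  # irreversible: sequenced last
])
```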


3 reasons users can’t stop making security mistakes — unless you address them

People are naturally inclined to find the fastest possible route at work, and that often translates into taking shortcuts that compromise security for the sake of convenience. Even tech employees are not immune when, for example, importing libraries from public repositories assuming these are safe, as they continue to be used to distribute malware and steal passwords. To avoid these shortcuts that can threaten systems, CISOs can put automated MFA prompts in place to avoid risks due to compromised passwords and restrict access to services that could put data at risk, including generative AI or downloadable libraries of code. ... Users should use out-of-band communication for verification to deter attacks and scams. Contacting those businesses through a phone number or email previously established as legitimate is a good way to ascertain whether or not the message is authorized by the entity it claims. While CISOs can’t eliminate all human risk, they can significantly reduce incidents and promote a cyber-aware culture with a strategy that addresses the psychological drivers behind poor decisions.


Elevating Defense Precision With AI-Powered Threat Triage in Proactive Dynamic Security

AI-powered threat triage operates on the principle of predictive analytics, leveraging machine learning algorithms to sift through massive datasets and identify patterns indicative of potential security threats. By continuously analyzing historical data and monitoring network activity, AI systems can detect subtle anomalies and deviations from normal behavior that may signify an impending attack. Moreover, AI algorithms can adapt and learn from new data, enabling them to evolve and improve their threat detection capabilities over time. In the perpetual battle against an ever-expanding array of cyber threats, organizations are increasingly turning to innovative technologies to bolster their defenses and stay ahead of potential attacks. ... At the forefront of this technological revolution is the integration of Artificial Intelligence (AI) into threat triage processes, where the intricate dynamics of advanced algorithms and machine learning capabilities are ushering in a new era of proactive defense that transforms traditional cybersecurity strategies.
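
A minimal sketch of that predictive-analytics loop, using synthetic data and illustrative features: an unsupervised anomaly detector is fit on historical network activity, and new events are scored so the most anomalous surface first for triage:

```python
# AI-assisted triage sketch: learn "normal" behavior from history, then rank
# new events by anomaly score so analysts see the most suspicious first.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Historical activity features: [bytes_out_kb, login_failures, distinct_ports]
normal = np.column_stack([
    rng.normal(500, 50, 1000),
    rng.poisson(1, 1000),
    rng.poisson(3, 1000),
])
model = IsolationForest(random_state=0).fit(normal)

new_events = np.array([
    [510, 0, 2],      # looks routine
    [9000, 40, 120],  # exfiltration-like burst
])
scores = model.score_samples(new_events)  # lower = more anomalous
for event, s in sorted(zip(new_events.tolist(), scores), key=lambda p: p[1]):
    print(f"anomaly score {s:.3f} -> triage {event}")
```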



Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman