Daily Tech Digest - September 29, 2023

Why root causes matter in cybersecurity

In the cybersecurity industry, unfortunately, there is no official directory of root causes. Many vendors categorize certain attacks as root causes when, in reality, these are often outcomes or symptoms. For example, ransomware, remote access, stolen credentials, etc., are all symptoms, not root causes. The root cause behind remote access or stolen credentials is most likely human error or some vulnerability. ... The true root cause is human error. People are prone to mistakes, ignorance, and biases. We open malicious attachments, click on the wrong links, visit the wrong websites, use weak credentials, and reuse passwords across multiple sites. We use unauthorized software and post our private details publicly on social media for bad actors to scrape and harvest. We take security far too much for granted. Human error in cybersecurity is a much larger problem than previously anticipated or documented. To clamp down on human error, organizations must train employees enough that they develop a security instinct and improve their security habits. Clear policies and procedures must be in place so everyone understands their responsibility and accountability towards the business.


Running Automation Tests at Scale Using Java

As customer decision-making is now highly dependent on digital experience as well, organisations are increasingly investing in the quality of that digital experience. That means establishing high internal QA standards and, most importantly, investing in automation testing for faster release cycles. So, how does this concern you as a developer or tester? Having automation skills on your resume is highly desirable in the current employment market. Additionally, getting started is quick. Selenium is the ideal framework for beginning automation testing. It is the most popular automated testing framework and supports all major programming languages. This post will discuss Selenium, how to set it up, and how to use Java to create an automated test script. Next, we will see how to use a Java-based testing framework like TestNG with Selenium and perform parallel test execution at scale on the cloud using LambdaTest.
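As a rough sketch of what such a script can look like — this is an illustration rather than the article's code, and it assumes the selenium-java and testng dependencies on the classpath, a locally available ChromeDriver, and a placeholder URL and title check:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class HomePageTest {
    private WebDriver driver;

    @BeforeMethod
    public void setUp() {
        // Each test method gets its own browser session.
        driver = new ChromeDriver();
    }

    @Test
    public void homePageTitleContainsExpectedText() {
        driver.get("https://example.com");  // placeholder URL
        Assert.assertTrue(driver.getTitle().contains("Example"),
                "Page title should contain the expected text");
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}
```

To scale this out on a cloud grid such as LambdaTest, the usual change is to swap ChromeDriver for a RemoteWebDriver pointed at the grid's hub URL with the desired browser capabilities, and to enable parallel execution in testng.xml (for example, parallel="methods" with a suitable thread-count).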


How Generative AI Can Support DevOps and SRE Workflows

Querying a bunch of different tools for logs and a bunch of different observability data and outputs manually requires a lot of time and knowledge, which isn’t necessarily efficient. Where is that metric? Which dashboard is it in? What’s the machine name? How do other people typically refer to it? What kind of time window do people typically look at here? And so forth. “All that context has been done before by other people,” Nag said. And generative AI can enable engineers to use natural language prompts to find exactly what they need — and often kick off the next steps in subsequent actions or workflows automatically as well, often without ever leaving Slack ... The cloud native ecosystem is vast (and continually growing) — keeping up with the intricacies of everything is almost impossible. With generative AI, Nag said, no one actually needs to know the ins and outs of dozens of different systems and tools. A user can simply say “scale up this pod by two replicas or configure this Lambda [function] this way.”


Diverse threat intelligence key to cyberdefense against nation-state attacks

Most threat intelligence houses currently originate from the West or are Western-oriented, and this can result in bias or skewed representations of the threat landscape, noted Minhan Lim, head of research and development at Ensign Labs. The Singapore-based cybersecurity vendor was formed through a joint venture between local telco StarHub and state-owned investment firm, Temasek Holdings. "We need to maintain neutrality, so we're careful about where we draw our data feeds," Lim said in an interview with ZDNET. "We have data feeds from all reputable [threat intel] data sources, which is important so we can understand what's happening on a global level." Ensign also runs its own telemetry and SOCs (security operations centers), including in Malaysia and Hong Kong, collecting data from sensors deployed worldwide. Lim added that the vendor's clientele comprises multinational corporations (MNCs), including regional and China-based companies, that have offices in the U.S., Europe, and South Africa.


Where Does Zero Trust Fall Short? Experts Weigh In

The strategy of ZT can be applied to all of those areas and, if done correctly and intelligently, a solid strategic approach can be beneficial. There is no ZT product that can simply make those areas secure, however. I would also suggest that the largest area of threat is privileged access, as that is historically the most common avenue of lateral movement and escalating compromise.” ... “It’s a multifaceted issue when determining the greatest threat among the areas where zero trust falls short. At the core, privileged access stands out as the most alarming vulnerability. These users, often likened to having ‘keys to the kingdom,’ possess the capabilities to access confidential data, modify configurations and undertake actions that could severely jeopardize an organization. “However, an underlying concern that might be overlooked is the reason behind the extensive distribution of privileged access. In many situations, this excessive access stems from challenges tied to legacy systems, IoT devices, third-party services, and emerging technologies and applications.


Data Management Challenges In Heterogeneous Systems

When you look at the whole chiplet ecosystem, there are certain blocks we feel can be generalized and made into chiplets, or known good die, that can be brought into the market. The secret sauce is the custom piece of silicon, and companies can design and own the recipe around that. But there are generic components in any SoC — memory, interconnects, processors. You can always fragment it in a way that there are some general components, which you can leverage from the general market, and which will help everyone. That brings down the cost of building your system so you can focus on problems around your secret sauce. ... We need something like a three-tier data management system, where tier one is data everyone can access and share, and tier three is only for people within a company. But I don’t know when we’ll get there, because data management is a really tough problem. ... We may need new approaches. Just looking at this from the hyperscale cloud perspective, which is huge, with complex hardware/software systems and things coming in from many vendors, how do we protect it?


Companies are already feeling the pressure from upcoming US SEC cyber rules

Calculating the financial ramifications of a cybersecurity incident under the upcoming rules has placed pressure on corporate leaders to collaborate more closely with CISOs and other cybersecurity professionals within their organizations. Right now, a "gulf exists between boards and CFOs and their cybersecurity defense teams, their chief information security officers," Gerber says. "The two aren’t speaking the same language yet." Gerber thinks that "what companies and CFOs are realizing is that they need to get their teams into these exercises so that they can practice making their determinations as accurately and clearly as they can and as early as they can." "I think that the general counsels and the CISOs have been at arm’s length of each other, and I’m going to tell you one extreme," Sanna says. "One CISO told us that their legal or general counsel did not want them to assess cyber risk in financial terms so they could claim ignorance and not have to report it."


A Guide to Data-Driven Design and Architecture

Data-driven architecture involves designing and organizing systems, applications, and infrastructure with a central focus on data as a core element. Within this architectural framework, decisions concerning system design, scalability, processes, and interactions are guided by insights and requirements derived from data. Fundamental principles of data-driven architecture include: Data-centric design – Data is at the core of design decisions, influencing how components interact, how data is processed, and how insights are extracted. Real-time processing – Data-driven architectures often involve real-time or near real-time data processing to enable quick insights and actions. Integration of AI and ML – The architecture may incorporate AI and ML components to extract deeper insights from data. Event-driven approach – Event-driven architecture, where components communicate through events, is often used to manage data flows and interactions.
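As a small illustration of the event-driven principle above (my sketch, not from the article — the bus, event, and subscriber names are invented, and the syntax assumes Java 17), components publish data events to a bus and any number of consumers react to them without the producer knowing who they are:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A deliberately tiny in-memory event bus; real systems would use a broker
// such as Kafka or a cloud eventing service.
class EventBus {
    private final List<Consumer<Object>> subscribers = new ArrayList<>();

    void subscribe(Consumer<Object> handler) {
        subscribers.add(handler);
    }

    void publish(Object event) {
        // Every subscriber sees the event; the producer knows nothing about them.
        subscribers.forEach(handler -> handler.accept(event));
    }
}

record OrderPlaced(String orderId, double amount) {}

public class EventDrivenSketch {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // An analytics view and an ML feature pipeline both consume the same
        // data event independently, which is how data flows stay decoupled.
        bus.subscribe(e -> System.out.println("analytics ingests: " + e));
        bus.subscribe(e -> System.out.println("feature store updates: " + e));
        bus.publish(new OrderPlaced("o-123", 42.0));
    }
}
```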


The Search for Certainty When Spotting Cyberattacks

Exacerbating the problem is the availability of malware and ransomware services for sale on the Dark Web, Taylor said, which can arm bad actors with the means of doing digital harm even if they lack coding skills of their own. That makes it harder to profile and identify specific attackers, he said, because thousands of bad actors might buy the same tools to attack systems. “We can’t identify where it’s coming from very easily,” Taylor said, because almost anybody could be a hacker. “You don’t have to be the expert anymore. You don’t have to be the cyber gang that’s very technically adept at developing all these tools.” That means cyberattacks may be launched from unexpected angles. For example, he said, gangs could outsource their hacking needs via such resources, or individuals who are simply bored at home might pick up such tools from the Dark Web to create phishing campaigns. “It becomes harder and harder to profile the threat.”


How Listening to the Customer Can Boost Innovation

Product development should not rely solely on customer input. Development teams should also take product metrics into account. Most, if not all, SaaS products today track a wealth of product metrics that show how customers use and engage with products. These insights can drive product development and strategy. For example, by providing insights on how individual customers are interacting with products, development teams can see what features customers are and aren’t using, or perhaps struggling with. This can validate whether customer requests to improve certain features are correct. Metrics can also show whether new products or services are performing well and having a positive impact on business outcomes. From a business perspective, you want new services to improve engagement, retention and sentiment, and metrics can show the benefits of listening to the customer by demonstrating how new services are helping to improve revenue growth.



Quote for the day:

"Become the kind of leader that people would follow voluntarily, even if you had no title or position." --Brian Tracy

Daily Tech Digest - September 28, 2023

What is artificial general intelligence really about?

AGI is a hypothetical intelligent agent that can accomplish the same intellectual achievements humans can. It could reason, strategize, plan, use judgment and common sense, and respond to and detect hazards or dangers. This type of artificial intelligence is much more capable than the AI that powers the cameras in our smartphones, drives autonomous vehicles, or completes the complex tasks we see performed by ChatGPT. ... AGI could change our world, advance our society, and solve many of the complex problems humanity faces whose solutions are far beyond humans' reach. It could even identify problems humans don't know exist. "If implemented with a view to our greatest challenges, [AGI] can bring pivotal advances in healthcare, improvements to how we address climate change, and developments in education," says Chris Lloyd-Jones, head of open innovation at Avanade. ... AGI carries considerable risks, and experts have warned that advancements in AI could cause significant disruptions to humankind. But expert opinions vary on quantifying the risks AGI could pose to society.


How to avoid the 4 main pitfalls of cloud identity management

DevOps and Security teams are often at odds with each other. DevOps wants to ship applications and software as fast and efficiently as possible, while Security’s goal is to slow the process down and make sure bad actors don’t get in. At the end of the day, both sides are right – fast development is useless if it creates misconfigurations or vulnerabilities, and security is ineffective if it’s shoved toward the end of the process. Historically, deploying and managing IT infrastructure was a manual process. This setup could take hours or days to configure, and required coordination across multiple teams. (And time is money!) Infrastructure as code (IaC) changes all of that and enables developers to simply write code to deploy the necessary infrastructure. This is music to DevOps teams’ ears, but creates additional challenges for security teams. IaC puts infrastructure in the hands of developers, which is great for speed but introduces some potential risks. To remedy this, organizations need to be able to find and fix misconfigurations in IaC and automate testing and policy management.


Why a DevOps approach is crucial to securing containers and Kubernetes

DevOps, which is heavily focused on automation, has significantly accelerated development and delivery processes, making the production cycle lightning fast, leaving traditional security methods lagging behind, Carpenter says. “From a security perspective, the only way we get ahead of that is if we become part of that process,” he says. “Instead of checking everything at the point it’s deployed or after deployment, applying our policies, looking for problems, we embed that into the delivery pipeline and start checking security policy in an automated fashion at the time somebody writes source code, or the time they build a container image or ship that container image, in the same way developers today are very used to, in their pipelines.” It’s “shift left security,” or taking security policies and automating them in the pipeline to unearth problems before they get to production. It has the advantage of speeding up security testing and enables security teams to keep up with the efficient DevOps teams. “The more things we can fix early, the less we have to worry about in production and the more we can find new, emerging issues, more important issues, and we can deal with higher order problems inside the security team,” he says.


Understanding Europe's Cyber Resilience Act and What It Means for You

The act is broader than a typical IoT security standard because it also applies to software that is not embedded. That is to say, it applies to the software you might use on your desktop to interact with your IoT device, rather than just the software on the device itself. Since non-embedded software is where many vulnerabilities arise, this is an important change. A second important change is the requirement for five years of security updates and vulnerability reporting. Few consumers who buy an IoT device expect regular software updates and security patches over that length of time, but both will be a requirement under the CRA. The third important point of the standard is the requirement for some sort of reporting and alerting system for vulnerabilities, so that consumers can report vulnerabilities, see the status of security and software updates for devices, and be warned of any risks. The CRA also requires that manufacturers notify the European Union Agency for Cybersecurity (ENISA) of a vulnerability within 24 hours of discovery.


Conveying The AI Revolution To The Board: The Role Of The CIO In The Era Of Generative AI

Narratives can be powerful, especially when they’re rooted in reality. By curating a list of businesses that have thrived with or invested in AI—especially those within your sector—and bringing forth their successful integration case studies, you can demonstrate not just possibilities but proven success. It conveys a simple message: If they can, so can we. ... Change, especially one as foundational as AI, can be daunting. Set up a task force to outline the stages of AI implementation, starting with pilot projects. A clear, step-by-step road map demystifies the journey from our current state to an AI-integrated future. It offers a sense of direction by detailing resource allocations, potential milestones and timelines—transforming the AI proposition from a vague idea into a concrete plan. ... In our zeal to champion AI, we mustn’t overlook the ethical considerations it brings. Draft an AI ethics charter, highlighting principles and practices to ensure responsible AI adoption. Addressing issues like data privacy, bias mitigation and the need for transparent algorithms proactively showcases a balanced, responsible approach.


Chip industry strains to meet AI-fueled demands — will smaller LLMs help?

Avivah Litan, a distinguished vice president analyst at research firm Gartner, said sooner or later the scaling of GPU chips will fail to keep up with growth in AI model sizes. “So, continuing to make models bigger and bigger is not a viable option,” she said. iDEAL Semiconductor's Burns agreed, saying, "There will be a need to develop more efficient LLMs and AI solutions, but additional GPU production is an unavoidable part of this equation." "We must also focus on energy needs," he said. "There is a need to keep up in terms of both hardware and data center energy demand. Training an LLM can represent a significant carbon footprint. So we need to see improvements in GPU production, but also in the memory and power semiconductors that must be used to design the AI server that utilizes the GPU." Earlier this month, the world’s largest chipmaker, TSMC, admitted it's facing manufacturing constraints and limited availability of GPUs for AI and HPC applications. 


NoSQL Data Modeling Mistakes that Ruin Performance

Getting your data modeling wrong is one of the easiest ways to ruin your performance. And it’s especially easy to screw this up when you’re working with NoSQL, which (ironically) tends to be used for the most performance-sensitive workloads. NoSQL data modeling might initially appear quite simple: just model your data to suit your application’s access patterns. But in practice, that’s much easier said than done. Fixing data modeling is no fun, but it’s often a necessary evil. If your data modeling is fundamentally inefficient, your performance will suffer once you scale to some tipping point that varies based on your specific workload and deployment. Even if you adopt the fastest database on the most powerful infrastructure, you won’t be able to tap its full potential unless you get your data modeling right. ... How do you address large partitions via data modeling? Basically, it’s time to rethink your primary key. The primary key determines how your data will be distributed across the cluster, which affects both performance and resource utilization.
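To make "rethinking the primary key" concrete, here is a hypothetical example for a Cassandra/ScyllaDB-style wide-column store — the tables and columns are invented, and the CQL is simply held in Java text blocks. Keying readings by sensor alone lets a chatty sensor's partition grow without bound; adding a time bucket to the partition key caps partition size while keeping one sensor-day of data together:

```java
// Schema sketch only; in a real application these statements would be run by
// a driver session or a migration tool.
public final class SensorSchemaSketch {

    // Anti-pattern: one unbounded partition per sensor.
    static final String UNBOUNDED_PARTITION = """
        CREATE TABLE readings_by_sensor (
            sensor_id    uuid,
            reading_time timestamp,
            value        double,
            PRIMARY KEY (sensor_id, reading_time)
        )""";

    // Better: a composite partition key (sensor_id, day) keeps each partition
    // to a predictable size, and reading_time still orders rows within it.
    static final String BUCKETED_BY_DAY = """
        CREATE TABLE readings_by_sensor_day (
            sensor_id    uuid,
            day          date,
            reading_time timestamp,
            value        double,
            PRIMARY KEY ((sensor_id, day), reading_time)
        )""";

    private SensorSchemaSketch() {}
}
```

The trade-off is that queries must now supply the day bucket (or iterate over a range of buckets), which is why this kind of change only makes sense when it matches the application's access patterns.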


AI and customer care: balancing automation and agent performance

AI alone brings real challenges to delivering outstanding customer service and satisfaction. For starters, this technology must be perfect, or it can lead to misunderstandings and errors that frustrate customers. It also lacks the humanised context of empathy and understanding of every customer’s individual and unique needs. A concern we see repeatedly is whether AI will eventually replace human engagement in customer service. Despite the recent advancements in AI technology, I think we can agree it remains increasingly unlikely. Complex issues that arise daily with customers still require human assistance. While AI’s strength lies in dealing with low-touch tasks and making agents more effective and productive, at this point, more nuanced issues still demand the human touch. However, the expectation from AI shouldn’t be to replace humans. Instead, the focus should be on how AI can streamline access to live-agent support and enhance the end-to-end customer care process. 


How to Handle the 3 Most Time-Consuming Data Management Activities

In the context of data replication or migration, data integrity can be compromised, resulting in inconsistencies or discrepancies between the source and target systems. This issue is the second most common challenge faced by data producers, cited by 40% of organizations, according to The State of DataOps report. Replication processes generate redundant copies of data, while migration efforts may inadvertently leave extraneous data in the source system. Consequently, this situation can lead to uncertainty regarding which data version to rely upon and can result in wasteful consumption of storage resources. ... Another factor affecting data availability is the use of multiple cloud service providers and software vendors. Each offers proprietary tools and services for data storage and processing. Organizations that heavily invest in one platform may find it challenging to switch to an alternative due to compatibility issues. Transitioning away from an ecosystem can incur substantial costs and effort for data migration, application reconfiguration, and staff retraining.


The Secret of Protecting Society Against AI: More AI?

One of the areas of greatest concern with generative AI tools is the ease with which deepfakes -- images or recordings that have been convincingly altered and manipulated to misrepresent someone -- can be generated. Whether it is highly personalized emails or texts, audio generated to match the style, pitch, cadence, and appearance of actual employees, or even video crafted to appear indistinguishable from the real thing, phishing is taking on a new face. To combat this, tools, technologies, and processes must evolve to create verifications and validations to ensure that the parties on both ends of a conversation are trusted and validated. One of the methods of creating content with AI is using generative adversarial networks (GAN). With this methodology, two processes -- one called the generator and the other called the discriminator -- work together to generate output that is almost indistinguishable from the real thing. During training and generation, the tools go back and forth between the generator creating output and the discriminator trying to guess whether it is real or synthetic. 
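A structural sketch of that generator/discriminator loop looks roughly like the following — this is my illustration, not code from the article; Generator and Discriminator are hypothetical interfaces standing in for neural networks that would normally be built and trained with a deep learning framework:

```java
import java.util.List;

interface Generator {
    double[] generate(double[] noise);            // produce a synthetic sample from random noise
    void improveAgainst(Discriminator critic);    // adjust weights to better fool the critic
}

interface Discriminator {
    double realismScore(double[] sample);         // ~1.0 = judged real, ~0.0 = judged synthetic
    void learnFrom(List<double[]> real, List<double[]> fake);  // adjust weights to tell them apart
}

public class AdversarialTrainingSketch {
    static void trainOneRound(Generator generator, Discriminator discriminator,
                              List<double[]> realSamples, List<double[]> noiseBatch) {
        // 1. The generator produces synthetic samples from noise.
        List<double[]> fakes = noiseBatch.stream().map(generator::generate).toList();

        // 2. The discriminator learns to separate real samples from synthetic ones.
        discriminator.learnFrom(realSamples, fakes);

        // 3. The generator updates so that its next batch scores closer to "real".
        generator.improveAgainst(discriminator);

        // Repeating this back-and-forth is what pushes the output toward being
        // indistinguishable from the real thing.
        double avgScore = fakes.stream().mapToDouble(discriminator::realismScore).average().orElse(0);
        System.out.printf("average realism score this round: %.3f%n", avgScore);
    }
}
```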



Quote for the day:

''You are the only one who can use your ability. It is an awesome responsibility.'' -- Zig Ziglar

Daily Tech Digest - September 27, 2023

CISOs are struggling to get cybersecurity budgets: Report

"Across industries, the decline in budget growth was most prominent in tech firms, which dropped from 30% to 5% growth YoY," IANS said in a report on the study. "More than a third of organizations froze or cut their cybersecurity budgets." Budget growth was the lowest in sectors that are relatively mature in cybersecurity, such as retail, tech, finance, and healthcare, added the report. ... Of the CISOs whose companies did increase cybersecurity budgets, 80% indicated extreme circumstances, such as a security incident or a major industry disruption, drove the budget increase. While companies impacted by a cybersecurity breach added 18% to their budget on average, other industry disruptions contributed to a 27% budget boost. "I think there has always been a component of security spending that is forced to be reactive: be it incidents, updated regulatory or vendor controls or shifting business priorities," Steffen said. "To some degree, technology spending in general has always been like this, and will always likely be this way."


Lifelong Machine Learning: Machines Teaching Other Machines

Lifelong learning is a relatively new field in machine learning, where AI agents are learning continually as they come across new tasks. The goal of LL is for agents to acquire new knowledge of novel tasks, without forgetting how to perform previous tasks. This approach is different from the typical “train-then-deploy” machine learning, where agents cannot learn progressively without “catastrophic interference” (also called catastrophic forgetting) in future tasks, in which the AI abruptly and drastically forgets previously learned information upon learning new information. According to the team, their work represents a potentially new direction in the field of lifelong machine learning, as current work in LL involves getting a single AI agent to learn tasks one step at a time in a sequential way. In contrast, SKILL involves a multitude of AI agents all learning at the same time in a parallel way, thus significantly accelerating the learning process. The team’s findings demonstrate that when SKILL is used, the amount of time required to learn all 102 tasks is reduced by a factor of 101.5.


Is Your Organization Vulnerable to Shadow AI?

Perhaps the biggest danger associated with unaddressed shadow AI is that sensitive enterprise data could fall into the wrong hands. This poses a significant risk to privacy and confidentiality, cautions Larry Kinkaid, a consulting manager at BARR Advisory, a cybersecurity and compliance solutions provider. “The data could be used to train AI models that are commingled, or worse, public, giving bad actors access to sensitive information that could be used to compromise your company’s network or services.” There could also be serious financial repercussions if the data is subject to legal, statutory, or regulatory protections, he adds. Organizations dedicated to responsible AI deployment and use follow strong, explainable, ethical, and auditable practices, Zoldi says. “Together, such practices form the basis for a responsible AI governance framework.” Shadow AI occurs out of sight and beyond AI governance guardrails. When used to make decisions or impact business processes, it usually doesn’t meet even basic governance standards. “Such AI is ungoverned, which could make its use unethical, unstable, and unsafe, creating unknown risks,” he warns.


Been there, doing that: How corporate and investment banks are tackling gen AI

In new product development, banks are using gen AI to accelerate software delivery using so-called code assistants. These tools can help with code translation (for example, .NET to Java), and bug detection and repair. They can also improve legacy code, rewriting it to make it more readable and testable; they can also document the results. Plenty of financial institutions could benefit. Exchanges and information providers, payments companies, and hedge funds regularly release code; in our experience, these heavy users could cut time to market in half for many code releases. For many banks that have long been pondering an overhaul of their technology stack, the new speed and productivity afforded by gen AI means the economics have changed. Consider securities services, where low margins have meant that legacy technology has been more neglected than loved; now, tech stack upgrades could be in the cards. Even in critical domains such as clearing systems, gen AI could yield significant reductions in time and rework efforts.


Microsoft’s data centers are going nuclear

The software giant is already working with at least one third-party nuclear energy provider in an effort to reduce its carbon footprint. The job ad, though, signals an effort to make nuclear energy an important part of its energy strategy. The posting said that the new nuclear expert “will maintain a clear and adaptable roadmap for the technology’s integration,” and have “experience in the energy industry and a deep understanding of nuclear technologies and regulatory affairs.” Microsoft has made no public statement on the specific goals of its nuclear energy program, but the obvious possibility — particularly in the wake of its third-party nuclear energy deal — is a concern for environmental issues. Although nuclear power has long been plagued by serious concerns about its safety and role in nuclear weapons proliferation, the rapidly worsening climate situation makes it a comparatively attractive alternative to fossil fuels, given the relatively large amount of energy that can be generated without producing atmospheric emissions.


The pitfalls of neglecting security ownership at the design stage

Without clear ownership of security during the design stage, many problems can quickly arise. Security should never be an afterthought, or a ‘bolted on’ mechanism after a product is created. Development teams primarily focus on creating functional and efficient software and hardware, whereas security teams specialize in identifying and mitigating potential risks. Without collaboration, or more ideally integration, between the two, security may be overlooked or not adequately addressed, leaving a heightened risk of cyber vulnerabilities. A good example is a privacy shutter for cameras in laptop computers. Ever see a sticky note on someone’s PC covering the camera? A design team may focus on the quality and placement of the camera as primary factors in the user experience. However, security professionals know that many users want a physical solution to guarantee cameras cannot capture images if they don’t want them to, and on/off indicator lights are not good enough.


Enterprise Architecture Must Adapt for Continuous Business Change

Continuous business change is an agile enterprise mindset that begins with the realization that change is constant and that business needs to be organized to support this continual change. This change is delivered as a constant flow of activity directed by distributed teams and democratized processes. It is orchestrated by the transparency of information and includes automated monitoring and workflows. This continuous business change requires EA, as a discipline, to evolve to match the new mindset. Change processes need to be adapted and updated to deliver faster time to value and quicker iteration of business ideas. These adaptations require the democratization of design, away from a traditional centralized approach, to allow for a quicker and more efficient change process. These change processes recognize autonomous business areas that deliver their own change. One example of this is moving away from being project-focused to being product-focused. Product-based companies organize their teams around autonomous products which may also be known as value streams or bounded domains.


A history of online payment security

Google was the first site to use two-factor authentication. They made it so that those requesting access were required to have not only a password, but access to the phone number used when creating the account. Since then, many companies have taken this system to the next level by providing their users with a multitude of ways to ensure the security of their online payments. They have implemented multiple ways to ensure the safety of their clients’ transactions, including password security, a six-digit PIN, account security tokens and SMS validation. Other than a DNA match, you can’t get much more verified than this. Privacy and confidentiality of information, especially when it concerns financial data, are critical to customer satisfaction. There are millions of financial transactions done online on a daily basis involving payments to online shopping websites or merchant stores, bill payments or bank transactions. Security of cashless transactions done on a virtual platform requires an element of bankability and trust that can only be generated by the best and most reputable brands and leaders in the industry.
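For a sense of how the account security tokens mentioned above typically work, many of them generate time-based one-time passwords (TOTP). The sketch below is my own illustration of the RFC 6238 scheme rather than anything described in the article, and the shared secret is simply the RFC's published demo key:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.time.Instant;

public class TotpSketch {

    // Computes a 6-digit time-based one-time password over a 30-second window.
    static String totp(byte[] sharedSecret, long epochSeconds) throws Exception {
        long counter = epochSeconds / 30;                      // time step index
        byte[] message = ByteBuffer.allocate(8).putLong(counter).array();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA1"));
        byte[] hash = mac.doFinal(message);

        // Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
        int offset = hash[hash.length - 1] & 0x0f;
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return String.format("%06d", binary % 1_000_000);
    }

    public static void main(String[] args) throws Exception {
        byte[] demoKey = "12345678901234567890".getBytes();    // RFC 6238 test-vector key
        System.out.println(totp(demoKey, Instant.now().getEpochSecond()));
    }
}
```

Both the server and the token hold the same secret, so the server can recompute the code for the current window and compare; SMS validation trades this shared-secret math for possession of the phone number instead.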


Rediscovering the value of information

In the corporate sector, the value destroyed by poor information management practices is often measured in fines and lawsuit payouts. But before such catastrophes come to light, what metrics do we use — or should we use — to determine whether a publicly traded company has its information management house in order? Who manages information more effectively — P&G or Unilever; Coke or Pepsi; GM or Ford; McDonald’s or Chipotle; Marriott or Hilton? When interviewing a potential new hire, how should we ascertain whether they are a skilled and responsible information manager? Business historians tell us that it was about 10 years before the turn of the century that “information” — previously thought to be a universal “good thing” — started being perceived as a problem. About 20 years after the invention of the personal computer, the general population started to feel overwhelmed by the amount of information being generated. We thrive on information, we depend on information, and yet we can also choke on it. We have available to us more information than one person could ever hope to process.


Software Delivery Enablement, Not Developer Productivity

Software delivery enablement and 2023’s trend of platform engineering won’t succeed by focusing solely on people and technology. At most companies, processes need an overhaul too. A team has “either a domain that they’re working in or they have a piece of functionality that they have to deliver,” she said. “Are they working together to deliver that thing? And, if not, what do we have to do to improve that?” Developer enablement should be concentrated at the team outcome level, says Daugherty, which can be positively influenced by four key capabilities: Continuous integration and continuous delivery (CI/CD); Automation and Infrastructure as Code (IaC); Integrated testing and security; and Immediate feedback. “Accelerate,” the iconic, metrics-centric guide to DevOps and scaling high-performing teams, identifies certain decisions that are proven to help teams speed up delivery. One is that when teams are empowered to choose which tools they use, performance improves.



Quote for the day:

“Success is actually a short race - a sprint fueled by discipline just long enough for habit to kick in and take over.” -- Gary W. Keller

Daily Tech Digest - September 26, 2023

How to Future-Proof Your IT Organization

Effective future-proofing begins with strong leadership support and investments in essential technologies, such as the cloud and artificial intelligence (AI). Leaders should encourage an agile mindset across all business segments to improve processes and embrace potentially useful new technologies, says Bess Healy ... Important technology advancements frequently emerge from various expert ecosystems, utilizing the knowledge possessed by academic, entrepreneurial, and business startup organizations, Velasquez observes. “Successful IT leaders encourage team members to operate as active participants in these ecosystems, helping reveal where the business value really is while learning how new technology could play a role in their enterprises.” It’s important to educate both yourself and your teams on how technologies are evolving, says Chip Kleinheksel, a principal at business consultancy Deloitte. “Educating your organization about transformational changes while simultaneously upskilling for AI and other relevant technical skillsets, will arm team members with the correct resources and knowledge ahead of inevitable change.”


How one CSO secured his environment from generative AI risks

"We always try to stay ahead of things at Navan; it’s just the nature of our business. When the company decided to adopt this technology, as a security team we had to do a holistic risk assessment.... So I sat down with my leadership team to do that. The way my leadership team is structured is, I have a leader who runs product platform security, which is on the engineering side; then we have SecOps, which is a combination of enterprise security, DLP – detection and response; then there’s a governance, risk and compliance and trust function, and that’s responsible for risk management, compliance and all of that. "So, we sat down and did a risk assessment for every avenue of the application of this technology. ... "The way we do DLP here is it’s based on context. We don’t do blanket blocking. We always catch things and we run in it like an incident. It could be insider risk or external, then we involve legal and HR counterparts. This is part and parcel with running a security team. We’re here to identify threats and build protections against them."


Governor at Fed Cautiously Optimistic About Generative AI

The adverse impact of AI on jobs will only be borne by a small set of people, in contrast to the many workers throughout the economy who will benefit from it, she said. "When the world switched from horse-drawn transport to motor vehicles, jobs for stable hands disappeared, but jobs for auto mechanics took their place." And it goes beyond just creating and eliminating positions. Economists encourage a perception of work in terms of tasks, not jobs, Cook said. This will require humans to obtain skills to adapt themselves to the new world. "As firms rethink their product lines and how they produce their goods and services in response to technical change, the composition of the tasks that need to be performed changes. Here, the portfolio of skills that workers have to offer is crucial." AI's benefits to society will depend on how workers adapt their skills to the changing requirements, how well their companies retrain or redeploy them, and how policymakers support those that are hardest hit by these changes, she said.


6 IT rules worth breaking — and how to get away with it

Automation, particularly when incorporating artificial intelligence, presents many benefits, including enhanced productivity, efficiency, and cost savings. It should be, and usually is, a top IT priority. That is, unless an organization is dealing with a complex or novel task that requires a nuanced human touch, says Hamza Farooq, a startup founder and an adjunct professor at UCLA and Stanford. Breaking a blanket commitment to automation prioritization can be justified when tasks involve creative problem-solving, ethical considerations, or situations in which AI’s understanding of a particular activity or process may be limited. “For instance, handling delicate customer complaints that demand empathy and emotional intelligence might be better suited for human interaction,” Farooq says. While sidelining automation may, in some situations, lead to more ethical outcomes and improved customer satisfaction, there’s also a risk of hampering a key organization process. “Overreliance on manual intervention could impact scalability and efficiency in routine tasks,” Farooq warns, noting that it’s important to establish clear guidelines for identifying cases in which an automation process should be bypassed.


Introduction to Azure Infrastructure as Code

One of the core benefits of IaC is that it allows you to check infrastructure code files into source control, just like you would with software code. This means that you can version and manage your infrastructure code just like any other codebase, which is important for ensuring consistency and enabling collaboration among team members. In early project work, IaC allows for quick iteration on potential configuration options through automated deployments instead of a manual "hunt and peck" approach. Templates can be parameterized to reuse code assets, making it easy to deploy repeatable environments such as dev, test and production. During the lifecycle of a system, IaC serves as an effective change-control mechanism. All changes to the infrastructure are first reflected in the code, which is then checked in as files in source control. The changes are then applied to each environment based on current CI/CD processes and pipelines, ensuring consistency and reducing the risk of human error.


National Cybersecurity Strategy: What Businesses Need to Know

Defending critical infrastructure, including systems and assets, is vital for national security, public safety, and economic prosperity. The NCS will standardize cybersecurity standards for critical infrastructure—for example, mandatory penetration tests and formal vulnerability scans—and make it easier to report cybersecurity incidents and breaches. ... Once the national infrastructure is protected and secured, the NCS will go bullish in efforts to neutralize threat actors that can compromise the cyber economy. This effort will rely upon global cooperation and intelligence-sharing to deal with rampant cybersecurity campaigns and lend support to businesses by using national resources to tactically disrupt adversaries. ... As the world’s largest economy, the U.S. has sufficient resources to lead the charge in future-proofing cybersecurity and driving confidence and resilience in the software sector. The goal is to make it possible for private firms to trust the ecosystem, build innovative systems, ensure minimal damage, and provide stability to the market during catastrophic events.


Preparing for the post-quantum cryptography environment today

"Post-quantum cryptography is about proactively developing and building capabilities to secure critical information and systems from being compromised through the use of quantum computers," Rob Joyce, Director of NSA Cybersecurity, writes in the guide. "The transition to a secured quantum computing era is a long-term intensive community effort that will require extensive collaboration between government and industry. The key is to be on this journey today and not wait until the last minute." This perfectly aligns with Baloo's thinking that now is the time to engage, and not to wait until it becomes an urgent situation. The guide notes how the first set of post-quantum cryptographic (PQC) standards will be released in early 2024 "to protect against future, potentially adversarial, cryptanalytically-relevant quantum computer (CRQC) capabilities. A CRQC would have the potential to break public-key systems (sometimes referred to as asymmetric cryptography) that are used to protect information systems today."


Future of payments technology

Embedded finance requires technology to build into products and services the capability to move money in certain circumstances, such as paying a toll on a motorway. The idea is to embed finance into the consumer journey so that consumers don’t have to actively pay; payment happens based on a contract or agreement made in advance. Consumers pay without consciously having to dig out their debit card. One example is Uber, where we widely use the service without having to make an actual payment upfront. These are sometimes referred to as “contextual payments” – where the context of the situation allows for payment to be frictionlessly executed. ... Artificial intelligence is already being used in payments to improve the customer journey and how products are delivered. So far, this has been machine learning. Generative AI, where the AI itself is able to make decisions, will be the next generational jump and have a huge impact on payments, especially when it comes to protection against fraud. The problem is that artificial intelligence could be a positive or a negative, depending on who gets to exploit it first, for good or ill.


Hiring revolutionised: Tackling skill demands with agile recruitment

Tech-enabled smart assessment frameworks not only provide scalability and objectivity in talent assessment but also help build a perception of fairness amongst candidates and internal stakeholders. L&T uses virtual assessments at the entry level, and Venkat believes in its tremendous scope for mid-level and leadership assessments too. Apurva shared that when infusing technology, many companies make the mistake of merely making things fancy without actually creating a winning EVP. The key to tech success is balancing personalised training with broader skill requirements. HR must develop a very good funnel by inculcating thought leadership around the quality of employees and must also focus on how these prospective employees absorb the culture of the organisation. This is a huge change exercise that entails identifying the skill gap, restructuring the job responsibilities, mapping specific roles with specific skills, assessing a person’s personality traits, and offering a very personalised onboarding so that people are productive when they join from day one. 


Designing Databases for Distributed Systems

As the name suggests, this pattern proposes that each microservice manages its own data. This implies that no other microservice can directly access or manipulate the data managed by another service. Any exchange or manipulation of data can be done only through a set of well-defined APIs. At face value, this pattern seems quite simple. It can be implemented relatively easily when we are starting with a brand-new application. However, when we are migrating an existing monolithic application to a microservices architecture, the demarcation between services is not so clear. ... In the command query responsibility segregation (CQRS) pattern, an application listens to domain events from other microservices and updates a separate database for supporting views and queries. We can then serve complex aggregation queries from this separate database while optimizing the performance and scaling it up as needed.
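A minimal sketch of that CQRS read side might look like the following — my illustration, not the article's code; the event, store, and class names are invented, and in practice the events would arrive through a message broker and the read model would live in its own database, per the database-per-service pattern above:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A domain event published by another microservice (e.g., the shipping service).
record OrderShipped(String orderId, String customerId) {}

class CustomerShipmentsReadModel {
    // Denormalized view owned by the query side: customerId -> shipped-order count.
    private final Map<String, Integer> shippedCountByCustomer = new ConcurrentHashMap<>();

    // Invoked for every OrderShipped event this service consumes.
    void on(OrderShipped event) {
        shippedCountByCustomer.merge(event.customerId(), 1, Integer::sum);
    }

    // Query side: cheap aggregate reads with no joins against the write-side database.
    int shippedOrders(String customerId) {
        return shippedCountByCustomer.getOrDefault(customerId, 0);
    }
}

public class CqrsSketch {
    public static void main(String[] args) {
        CustomerShipmentsReadModel readModel = new CustomerShipmentsReadModel();
        readModel.on(new OrderShipped("o-1", "c-42"));
        readModel.on(new OrderShipped("o-2", "c-42"));
        System.out.println(readModel.shippedOrders("c-42"));   // prints 2
    }
}
```

The write-side service only publishes events; the read model owns its denormalized store, which keeps the service boundary intact while still allowing fast aggregate queries.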



Quote for the day:

"Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through." -- Jarod Kint z

Daily Tech Digest - September 25, 2023

Computer vision's next breakthrough

Beyond quality and efficiency, computer vision can help improve worker safety and reduce accidents on the factory floor and other job sites. According to the US Bureau of Labor Statistics, there were nearly 400,000 injuries and illnesses in the manufacturing sector in 2021. “Computer vision enhances worker safety and security in connected facilities by continuously identifying potential risks and threats to employees faster and more efficiently than via human oversight,” says Yashar Behzadi, CEO and founder of Synthesis AI. “For computer vision to achieve this accurately and reliably, the machine learning models are trained on massive amounts of data, and in these particular use cases, the unstructured data often comes to the ML engineer raw and unlabeled.” Using synthetic data is also important for safety-related use cases, as manufacturers are less likely to have images highlighting the underlying safety factors. “Technologies like synthetic data alleviate the strain on ML engineers by providing accurately labeled, high-quality data that can account for edge cases, saving time, money, and the headache inaccurate data causes,” adds Behzadi.


Five years on: the legacy of GDPR

Five years on, “the European regulation has inspired data protection around the world and many countries have put privacy standards in place. These include countries in South America such as Argentina, Brazil, and Chile, and in Asia, such as Japan and South Korea. In Australia, the Privacy Act has been in place since 1988, but was recently amended to mirror GDPR concepts. GDPR has also had a strong influence in the US, where several states introduced data protection legislation, including California with the California Consumer Privacy Act, and Colorado with the Colorado Consumer Protection Act. On a federal level, the draft American Data Privacy and Protection Act is another example of where regulation is heading.” So what impact has it had on how organisations are run and data is handled? Aditya Fotedar, CIO at Tintri, a provider of auto-adaptive, workload-intelligent platforms, explains that while GDPR has ushered in significant changes, they are built upon existing regulations: “GDPR was a progression on the existing EU privacy laws, the main changes being the sub-processor contractual clauses, the right to be forgotten, and the size of the fines.


Embracing Privacy by Design as a Corporate Responsibility

Companies are increasingly realizing the immense importance of a paradigm shift towards Privacy by Design. This is because this approach significantly reduces the cost of adapting to new legislation, builds consumer trust, and carries fewer risks. Data protection is here to stay, and this is a realization that everyone – from companies to legislators to consumers – is becoming more and more aware of and acting upon. The important thing now is to approach data protection more proactively – and to make it a general corporate responsibility. Data protection rights are also human rights! So far, the advertising industry has viewed data protection as a drag, but this perception will have to change as we move through 2023. After all, data protection is no longer a limitation, but a selling point. As a result, industry players are beginning to view it as a worthwhile investment rather than a cost. Companies are doing this proactively because they want to stay competitive and keep their brand privacy-centric, and to ensure that customers continue to trust them.


4 reasons cloud data repatriation is happening in storage

Moving storage to another location means disconnecting on-site storage resources, such as SANs, NAS devices, RAID equipment, optical storage and other technologies. But how likely is it that an IT department making a push to cloud storage clears out the storage section of its data center and makes constructive use of the newly empty space? Not always likely, and the organization is still paying for every square foot of floor space in that data center. Assuming IT managers performed a careful, phased migration from on site to the cloud, they probably would have analyzed the use of space made available from the migration. If the company owns the displaced storage assets, managers must consider what happens to them after a department or application moves out of the data center. From a business perspective, it may make sense to retain these assets and have them ready for use in an emergency. This approach also ensures that storage resources are available if cloud data repatriation occurs, but it doesn't save space -- or money. Continual advances in computing power can mean that repatriation may not require as much physical space for the same or greater processing speeds and storage capacity.


10 digital transformation questions every CIO must answer

Am I engaging people on the front lines to formulate DX plans? According to Rogers, the answer should be yes. “You need people on the front lines, because it is the business units who have people out there talking to customers every day,” he says, adding that while C-suite support for transformation is crucial, the front-line perspectives offered by lower-tier employees are those that can identify where change is needed and can truly impact the business. ... Am I identifying and using the right business metrics to measure progress? Most CIOs have moved beyond using traditional IT metrics like uptime and application availability to determine whether a tech-driven initiative is successful. Still, there’s no guarantee that CIOs use the most appropriate metrics for measuring progress on a DX program, says Venu Lambu, CEO of Randstad Digital, a digital enablement partner. “It’s important to have the technology KPIs linked to business outcomes,” he explains. If your business wants to have faster time to market, improved customer engagement, or increased customer retention, those are what CIOs should measure to determine success.


Unlocking the Value of Cloud Services in the AI/ML Era

As cloud complexity and maturity grow, the goal for businesses should be more than just “lift and shift” scenarios, especially when such migrations can result in higher costs. The key is understanding how to unlock the real value of cloud services to meet specific organizational needs. For example, with a clear view of how a vendor’s PaaS and SaaS strengths map to business objectives, organizations can release new features, cut costs, and gain powerful new capabilities to support long-term outcomes using predefined ML models. Success demands that systems be continually evaluated to seek out iterative improvements, not treated as a one-off implementation. After all, technology is constantly evolving, so there’s no room to be complacent or to ignore the environment in which infrastructure operates. This is where human insight and expertise play a crucial role. For example, consider the matter of determining the right public or private cloud vendors for the business. Companies operating in highly regulated regions will need to consider how a cloud vendor can ensure data is compliant with localized regulations.


Insights from launching a developer-led bank

Traditional banks tend to treat policies as their primary tool for problem-solving. While policies are part of the source code that defines how a business operates, they do not define culture. An organisation’s real culture is found in the values and behaviours of the people who work there - how they interact, how they work towards their goals, and how they handle challenges. Culture is defined by who a company chooses to hire, fire, and promote. ... Unfortunately, traditional banks don't place much emphasis on core values and culture during hiring, preferring to focus solely on qualifications and experience. This is why many banks end up with a culture that is at odds with the one they claim to have - which is both misleading to the outside world and a source of strain and cognitive dissonance internally. Your focus should be on building a culture that goes beyond policy documents. You need a holistic recruitment strategy that assesses the candidate’s core values—how they work with others, their perception of accountability, and whether they display kindness and thoughtfulness. 


How global enterprises navigate the complex world of data privacy

Some of the strategies for balancing the need for personalized data analytics against ethical and legal data privacy responsibilities include: Data minimization: As per the previous response, avoid collecting excessive data that could pose a privacy risk, and only collect and use what is specific to the business objective. Transparency: Be transparent in your policies about what is collected, how it’s collected and how it will be used. Ensure explicit consent from your end users. Strong data governance: Ensure strong oversight not only in areas such as data security, but also in privacy by design, customer education, and audits and reviews, to enable your data privacy posture to constantly evolve. The balance between customer analytics and privacy is a delicate one that requires an ongoing commitment to fostering a culture of privacy and respect for data and end users within your organization. ... As AI and machine learning technologies continue to evolve, the challenges include ethical considerations, bias, and legal compliance, to name a few, but the opportunities are also significant.


Unmasking the MGM Resorts Cyber Attack: Why Identity-Based Authentication is the Future

As seen from the MGM cyber attack, relying on single-factor authentication is a glaring example of outdated security. This method must be revised today when cyber threats are increasingly sophisticated. Although a step in the right direction, multi-factor authentication can fall short if not implemented correctly. For instance, using easily accessible information as a second factor, like a text message sent to a phone, can be intercepted and exploited. The evolution of security measures has brought us from simple passwords to biometrics and beyond. Yet, many businesses are stuck in the past, relying on these half-measures. It’s not just about keeping up with the times; it’s about safeguarding your organization’s future. One-size-fits-all solutions are ineffective, and risk-based authentication should be the norm, not the exception. ... Security half-measures, like using codes, devices, or unverified biometrics as identity proxies, are more than just weak points; they open doors for cybercriminals. The MGM breach is a stark reminder of the dangers of compromised security. 


Metrics-Driven Developer Productivity Engineering at Spotify

An engineering department could have an OKR on the lagging metric of MTTR and a platform team supporting SREs would have a leading metric of log ingestion speed. These would both be in support of the company-level OKR to increase customer satisfaction, which is measured by things like net promoter scores (NPS), active users and churn rate. This emphasizes one of the important goals of platform engineering which is to increase engineers’ sense of purpose by connecting their work more closely to delivering business value. “Productivity cannot be measured easily. And certainly not with a single accurate number. And probably not even with a few of them. So these metrics about SRE efficiency or developer productivity, they need to be contextualized for your own company, your tech stack, your team even,” he said, emphasizing that the trends are typically more important than the actual values. “That does not mean that we cannot have a productive conversation about them. But it does mean there is no absolute way to measure” developer productivity, knowing that proxy metrics will never capture everything.



Quote for the day:

"A good plan executed today is better than a perfect plan executed tomorrow." -- General George Patton

Daily Tech Digest - September 24, 2023

How legacy systems are threatening security in mergers & acquisitions

Legacy systems are far more likely to get hacked. This is especially true for companies that become involved in private equity transactions, such as mergers, acquisitions, and divestitures. These transactions often result in IT system changes and large movements of data and financial capital, which leave organizations acutely vulnerable. With details of these transactions being publicized or publicly accessible, threat actors can specifically target companies likely to be involved in such deals. We have seen two primary trends throughout 2023: threat groups are closely following news cycles, enabling them to quickly target entire portfolios with zero-day attacks designed to upend aging technologies, disrupting businesses and their supply chains; and corporate espionage cases are on the rise as threat actors embrace longer dwell times and employ greater calculation in their methods of monetizing attacks. Together, this means the number of strategically calculated attacks, which are more insidious than hasty smash-and-grabs, is on the rise.


How Frontend Devs Can Take Technical Debt out of Code

To combat technical debt, developers — even frontend developers — must see their work as a part of a greater whole, rather than in isolation, Purighalla advised. “It is important for developers to think about what they are programming as a part of a larger system, rather than just that particular part,” he said. “There’s an engineering principle, ‘Excessive focus on perfection of art compromises the integrity of the whole.’” That means developers have to think like full-stack developers, even if they’re not actually full-stack developers. For the frontend, that specifically means understanding the data that underlies your site or web application, Purighalla explained. “The system starts with obviously the frontend, which end users touch and feel, and interface with the application through, and then that talks to maybe an orchestration layer of some sort, of APIs, which then talks to a backend infrastructure, which then talks to maybe a database,” he said. “That orchestration and the frontend has to be done very, very carefully.” Frontend developers should take responsibility for the data their applications rely on, he said.


Digital Innovation: Getting the Architecture Foundations Right

While the benefits of modernization are clear, companies don’t need to be cutting edge everywhere; they do need to apply the appropriate architectural patterns to the appropriate business processes. For example, Amazon Prime Video recently moved away from a microservices-based architecture for a streaming-media monitoring service. Weighing the additional complexity of service-oriented architectures, the team decided that a “modular monolith” would deliver most of the benefits for much less cost. Companies that make a successful transition to modern enterprise architectures get a few things right. ... Enterprise technology architecture isn’t something that most business leaders have had to think about, but they can’t afford to ignore it any longer. Together with the leaders of the technology function, they need to ask whether they have the right architecture to help them succeed. Building a modern architecture requires ongoing experimentation and a commitment to investment over the long term.
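
As a rough sketch of the modular-monolith pattern (illustrative only, not Amazon's actual design), the example below keeps clearly separated modules behind explicit interfaces but deploys them as a single process, so module calls stay in-process rather than crossing service boundaries.

```python
# A rough sketch of a modular monolith: separate modules with explicit
# interfaces, deployed as one process. All names are hypothetical.
from typing import Protocol

class Catalog(Protocol):
    def stream_url(self, title_id: str) -> str: ...

class Monitoring(Protocol):
    def record_playback(self, title_id: str) -> None: ...

class InMemoryCatalog:
    def stream_url(self, title_id: str) -> str:
        return f"https://cdn.example.com/{title_id}/manifest.m3u8"

class PrintMonitoring:
    def record_playback(self, title_id: str) -> None:
        print(f"playback started: {title_id}")

class StreamingApp:
    """A single deployable unit: modules talk via in-process calls, avoiding
    the serialization, networking, and orchestration overhead of running one
    service per module."""
    def __init__(self, catalog: Catalog, monitoring: Monitoring):
        self.catalog = catalog
        self.monitoring = monitoring

    def play(self, title_id: str) -> str:
        url = self.catalog.stream_url(title_id)
        self.monitoring.record_playback(title_id)
        return url

app = StreamingApp(InMemoryCatalog(), PrintMonitoring())
print(app.play("title-42"))
```

The modules can still be split into services later if scale demands it; the point is that the boundaries are defined by interfaces, not by network hops.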


GenAI isn’t just eating software, it’s dining on the future of work

As we step into this transformative era, the concept of “no-collar jobs” takes center stage. Paul introduced this idea in his book “Human + Machine,” where new roles are expected to emerge that don’t fit into the traditional white-collar or blue-collar molds; instead, the shift is giving rise to what he called ‘no-collar jobs.’ These roles defy conventional categories, relying increasingly on digital technologies, AI, and automation to enhance human capabilities. In this emergence of new roles, the only threat is to those “who don’t learn to use the new tools, approaches and technologies in their work.” While this new future involves a transformation of tasks and roles, it does not necessitate jobs disappearing. ... Just as AI has become an integral part of enterprise software today, GenAI will follow suit. In the coming year, we can expect to see established software companies integrating GenAI capabilities into their products. “It will become more common for companies to use generative AI capabilities like Microsoft Dynamics Copilot, Einstein GPT from Salesforce, or GenAI capabilities from ServiceNow or other capabilities that will become natural in how they do things.”


The components of a data mesh architecture

In a monolithic data management approach, technology drives ownership. A single data engineering team typically owns all the data storage, pipelines, testing, and analytics for multiple teams, such as Finance, Sales, etc. In a data mesh architecture, business function drives ownership. The data engineering team still owns a centralized data platform that offers services such as storage, ingestion, analytics, security, and governance. But teams such as Finance and Sales would each own their data and its full lifecycle (e.g. making code changes and maintaining code in production). Moving to a data mesh architecture brings numerous benefits: it removes roadblocks to innovation by creating a self-service model for teams to create new data products; it democratizes data while retaining centralized governance and security controls; and it decreases data project development cycles, saving money and time that can be driven back into the business. Because it’s evolved from previous approaches to data management, data mesh uses many of the same tools and systems that monolithic approaches use, yet exposes these tools in a self-service model combining agility, team ownership, and organizational oversight.
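
The ownership split described above can be sketched as follows; the platform, domains, and governance rule are hypothetical, intended only to show central services coexisting with domain-owned data products.

```python
# A hedged sketch of data mesh ownership: a central platform supplies shared
# services and governance, while domains own their data products. All names
# and the governance rule are invented for illustration.
class CentralPlatform:
    """Owned by the data engineering team: shared catalog, security, governance."""
    def __init__(self):
        self.catalog = {}

    def register(self, domain: str, product: str, schema: dict) -> None:
        # One governance rule applied uniformly to every domain's products.
        if "pii_fields" not in schema:
            raise ValueError("governance: each product must declare its PII fields")
        self.catalog[(domain, product)] = schema

class DomainDataProduct:
    """Owned end to end by a business domain such as Finance or Sales."""
    def __init__(self, platform: CentralPlatform, domain: str, name: str, schema: dict):
        platform.register(domain, name, schema)
        self.domain, self.name, self.schema = domain, name, schema

platform = CentralPlatform()
DomainDataProduct(platform, "sales", "orders",
                  {"columns": ["order_id", "amount", "customer_id"],
                   "pii_fields": ["customer_id"]})
print(platform.catalog)
```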


Six major trends in data engineering

Some modern data warehouse solutions, including Snowflake, allow data providers to seamlessly share data with users by making it available as a feed. This does away with the need for pipelines, as live data is shared in real-time without having to move the data. In this scenario, providers do not have to create APIs or FTPs to share data and there is no need for consumers to create data pipelines to import it. This is especially useful for activities such as data monetisation or company mergers, as well as for sectors such as the supply chain. ... Organisations that use data lakes to store large sets of structured and semi-structured data are now tending to create traditional data warehouses on top of them, thus generating more value. Known as a data lakehouse, this single platform combines the benefits of data lakes and warehouses. It is able to store unstructured data while providing the functionality of a data warehouse, to create a strategic data storage/management system. In addition to providing a data structure optimised for reporting, the data lakehouse provides a governance and administration layer and captures specific domain-related business rules.
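
As a loose illustration of the lakehouse idea, the sketch below layers a thin metadata catalog (owner, retention) over raw files in a lake; the catalog format and its fields are invented for illustration, and real lakehouses rely on table formats and governance tooling far richer than this.

```python
# A loose sketch of the lakehouse idea: raw files stay in the lake while a
# thin metadata layer adds table structure and governance on top.
import json
import pathlib

LAKE = pathlib.Path("lake")
CATALOG = LAKE / "_catalog.json"

def register_table(name: str, files: list, owner: str, retention_days: int) -> None:
    """Record table-level metadata (files, owning team, retention) over raw lake files."""
    LAKE.mkdir(exist_ok=True)
    catalog = json.loads(CATALOG.read_text()) if CATALOG.exists() else {}
    catalog[name] = {"files": files, "owner": owner, "retention_days": retention_days}
    CATALOG.write_text(json.dumps(catalog, indent=2))

register_table("clickstream_events",
               files=["raw/events/2023-09-24.parquet"],
               owner="web-analytics",
               retention_days=365)
print(CATALOG.read_text())
```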


From legacy to leading: Embracing digital transformation for future-proof growth

Digital transformation without a clear vision and roadmap is identified as a big reason for failure. Several businesses may adopt change because of emerging trends and rapid innovation without evaluating their existing systems or business requirements. To avoid such failure, every tech leader must develop a clear vision and a comprehensive roadmap aligned with organizational goals, ensuring each step of the transformation contributes to the overarching vision. ... The rapid pace of technological change often outpaces the availability of skilled professionals. In the meantime, tech leaders may struggle to find individuals with the right expertise to drive the transformation forward. To address this, businesses should focus on strategic upskilling using IT value propositions and hiring business-minded technologists. Furthermore, investing in individual workforce development can bridge this gap effectively. ... Many organizations grapple with legacy systems and outdated infrastructure that may not seamlessly integrate with modern digital solutions.


7 Software Testing Best Practices You Should Be Talking About

What sets the very best testers apart from the pack is that they never lose sight of why they’re conducting testing in the first place, and that means putting user interest first. These testers understand that testing best practices aren’t necessarily things to check off a list, but rather steps to take to help deliver a better end product to users. To become such a tester, you need to always consider software from the user’s perspective and take into account how the software needs to work in order to deliver on the promise of helping users do something better, faster, and easier in their daily lives. ... In order to keep an eye on the bigger picture and test with the user experience in mind, you need to ask questions, and lots of them. Testers have a reputation for asking questions, and it often comes across as them trying to prove something, but there’s actually an important reason why the best testers ask so many questions.


Why Data Mesh vs. Data Lake Is a Broader Conversation

Most businesses with large volumes of data use a data lake as their central repository to store and manage data from multiple sources. However, the growing volume and varied nature of data in data lakes makes data management challenging, particularly for businesses operating with various domains. This is where a data mesh approach can tie in to your data management efforts. The data mesh is a distributed, microservices-style approach to data management whereby extensive organizational data is split into multiple smaller domains and managed by domain experts. The value provided by implementing a data mesh for your organization includes simpler management and faster access to your domain data. By building a data ecosystem that implements a data lake with data mesh thinking in mind, you can grant every domain operating within your business its own product-specific data lake. This product-specific data lake helps provide cost-effective and scalable storage for housing your data and serving your needs. Additionally, with proper management by domain experts like data product owners and engineers, your business can serve independent but interoperable data products.


The Hidden Costs of Legacy Technology

Maintaining legacy tech can prove to be every bit as expensive as a digital upgrade. This is because IT staff have to spend time and money to keep the obsolete software functioning. This wastes valuable staff hours that could be channeled into improving products, services, or company systems. A report from Dell estimates that organizations currently allocate 60-80% of their IT budget to maintaining existing on-site hardware and legacy apps, which leaves only 20-40% of the budget for everything else. ... No company can defer upgrading its tech indefinitely: sooner or later, the business will fail as its rivals outpace it. Despite this urgency, many business leaders mistakenly believe that they can afford to defer their tech improvements and rely on dated systems in the meantime. However, this is a misapprehension and can lead to ‘technical debt’: the phenomenon in which relying on legacy systems avoids short-term costs at the price of larger long-term losses incurred when the systems must be reworked later on.



Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas