Daily Tech Digest - October 03, 2023

How AI can be a ‘multivitamin supplement’ for many industries

It won’t replace humans, just as supplements don’t replace a healthy diet. Still, it will strengthen companies’ existing operations and fill in the gaps that currently make work more burdensome for human laborers. ... It’s exciting to realize that there will soon be professions that we don’t even have names for yet. As the technology ages and matures and governing bodies create the necessary laws and regulations, our current state of uncertainty will transform into an exciting, bright new future of human-tech cooperation. We are already seeing this future take shape. For instance, MarTech companies are testing AI-powered fraud detection to supplement the work that human experts do to monitor traffic quality and transparency. This not only eases the human workload but helps companies save resources while getting better results overall. Similar benefits of human-AI collaboration can be seen in healthcare, with AI that can be trained to assist patients with recovery treatments or perform routine tasks in medical offices or hospitals, freeing nurses and doctors up to focus on patient outcomes.


Banking on Innovation: How Finance Transforms Technological Growth for Decision Makers

Regulation is a sensitive topic for the financial industry. While the need for a certain degree of oversight is universally accepted, excessive regulation can stifle the very innovation that drives economic growth. On the other hand, too little regulation can open the doors to risk accumulation and financial crises. Striking this balance is one of the most challenging tasks that government leaders face. Policies must be evidence-based, derived from transparent risk-assessment models and economic simulations. Regulatory sandboxes could offer a safe environment for financial institutions to experiment with new services and products under the watchful eye of regulators, thereby fostering innovation while ensuring compliance. ... One of the most potent ways in which PPPs can contribute to revenue management is through asset monetization. Governments often sit on a wealth of underutilized assets, ranging from real estate to utilities. A PPP can unlock the value of these assets by involving private-sector expertise and investment.


Microsoft Releases Its Own Distro of Java 21

Microsoft’s continuing support for OpenJDK is a strong indicator of how important Java is in the enterprise software space. “And the new features of Java 21 such as lightweight threads are maintaining Java’s relevance in the cloud native age,” said Mike Milinkovich, executive director of the Eclipse Foundation. “Being one of the first vendors to ship Java SE 21 support shows how focused Microsoft is in meeting the needs of Java developers deploying workloads on Azure.” Also, Spring developers will be pleased to know that Spring Boot 3.2 now supports Java 21 features. Many other frameworks and libraries will soon release their JDK 21-supported versions. “Microsoft has some of the best developer tool makers in the world — to have them add Java to the mix makes sense,” said Richard Campbell, founder of Campbell & Associates. “Of course, that happened a couple of years ago, and JDK 21 is just the latest implementation. In the end, Microsoft wants to ensure that Azure is a great place to run Java, so having a team working on Java running in Azure helps to make that true. What does it mean for the ecosystem? More choices for implementations of Java, better Java tooling, and more places to run Java fast and securely.”
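To make the "lightweight threads" reference concrete: Java 21 finalizes virtual threads, which let the JVM multiplex very large numbers of blocking tasks over a small pool of carrier threads. The snippet below is a minimal sketch compiled against JDK 21; the task body and counts are purely illustrative.

    import java.time.Duration;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.stream.IntStream;

    // Minimal sketch of Java 21 virtual threads: each submitted task runs on its own
    // lightweight thread, so blocking calls no longer tie up scarce platform threads.
    public class VirtualThreadsDemo {
        public static void main(String[] args) {
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(Duration.ofMillis(100)); // simulated blocking I/O
                        return i;
                    }));
            } // try-with-resources waits for submitted tasks to complete
        }
    }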


Why embracing complexity is the real challenge in software today

The reason we can’t just wish away or “fix” complexity is that every solution — whether it’s a technology or methodology — redistributes complexity in some way. Solutions reorganize problems. When microservices emerged (a software architecture approach where an application or system is composed of many smaller parts), they seemingly solved many of the maintenance and development challenges posed by monolithic architectures (where the application is one single interlocking system). However, in doing so microservices placed new demands on engineering teams; they require greater maturity in terms of practices and processes. This is one of the reasons why we cautioned people against what we call “microservice envy” in a 2018 edition of the Technology Radar, with CTO Rebecca Parsons writing that microservices would never be recommended for adoption on Technology Radar because “not all organizations are microservices-ready.” We noticed there was a tendency to look to adopt microservices simply because it was fashionable. This doesn’t mean the solution is poor or defective. 


Balancing Cost and Resilience: Crafting a Lean IT Business Continuity Strategy

Effective monitoring is the backbone of a resilient infrastructure. The approach should focus on: Filtering out the noise - Monitoring solutions need to ensure that only critical notifications are sent out, preventing information overload and ensuring that the right people are alerted promptly when critical events inevitably happen. Acting quickly and decisively - Time is of the essence during disruptions. IT, DevOps, SIRT, and even PR teams need to be well coordinated for various types of events. From security breaches to data center fires or even just mundane equipment failures, anything that might result in customer or operation disruptions will involve cross-team communications and collaboration. The only way to get better at handling these is to have documentation on what should be done, a clear chain of command, and practice drills. In conclusion, a comprehensive backup and recovery strategy is essential for businesses aiming for uninterrupted operations. While there are many solutions available in the market, it’s crucial to find one that aligns with your business needs. 
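As a toy illustration of the "filter out the noise" point, the sketch below routes only critical alerts to the on-call team and logs everything else for later review. The severity levels, sources, and routing targets are invented for the example; a real monitoring stack would do this inside its alerting rules.

    import java.util.List;

    // Illustrative sketch only: route alerts by severity so that only critical
    // events page the on-call team, while the rest go to a low-priority log.
    public class AlertRouter {
        enum Severity { INFO, WARNING, CRITICAL }

        record Alert(String source, Severity severity, String message) {}

        static void route(List<Alert> alerts) {
            for (Alert alert : alerts) {
                if (alert.severity() == Severity.CRITICAL) {
                    pageOnCall(alert);        // immediate, human attention
                } else {
                    logForReview(alert);      // batched, reviewed later
                }
            }
        }

        static void pageOnCall(Alert alert)   { System.out.println("PAGE " + alert); }
        static void logForReview(Alert alert) { System.out.println("LOG  " + alert); }

        public static void main(String[] args) {
            route(List.of(
                new Alert("db-01", Severity.WARNING, "disk 80% full"),
                new Alert("payments", Severity.CRITICAL, "error rate above 5%")));
        }
    }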


How do you solve a problem like payments infrastructure?

Today, banks need to be willing to adopt new technology to change, and this will involve working with a third-party service provider. Another roundtable participant added that as part of this process, it is imperative to continually validate and evaluate new enhancements. Otherwise, banks will end up believing that the improvements they made are unique when, in fact, competitors will keep pace or even pull ahead in innovation and in enticing new customers. This banker revealed that they opted not to disconnect from their existing infrastructure, but instead chose a top-layer architecture to process payments in a more efficient way. In line with this, the participant added that culture must be considered, because this is what brings together the different components that are needed and ultimately reveals when the time is right to change the systems. Providing background information, this Sibos attendee mentioned that 15 years ago, the bank considered whether it would be more cost effective to map local, regional, or global ISO 20022 messaging into existing architecture or to create a new platform that could work for the next 20 years.


GenAI: friend or foe in fraud risk management?

Building high-performance fraud detection algorithms today depends on real-life customer and transaction data to train and validate the models, which has remained a constraint. GenAI can help by creating realistic synthetic data for model training and validation, and by simulating fraud scenarios and attacks to identify vulnerabilities and design controls that mitigate those risks. Customer due diligence (CDD) is a critical function in fraud prevention – be it new client onboarding or new credit approvals (loans, credit cards, increasing credit limits) for existing clients. GenAI can be a great tool for going through piles of KYC documentation and cross-referencing it with customer-filled forms and other subscribed data sources of the FI to come up with a CDD summary report. GenAI can also be used to analyse user communications with FIs – such as emails, chats, documents and product and service requests – to extract insights on financial behaviour, sentiment and intent, and potential risks of fraud. Fraud investigations can also leverage GenAI for alert and dispute resolution by pulling together different sources of information on the context and providing a summary of the case that will aid in its decisioning.


Weaving Cyber Resilience into the Strategic Fabric of Higher Education Institutions

There is no shortage of steps that institutions can take to bolster their cyber resilience and ensure that, should the worst happen, they’re prepared. A good place to start is by assessing the institution’s current level of resilience and looking for any gaps or obstacles. In many cases, Goerlich says, the key is simplification. For example, adopting a zero-trust security strategy can also improve a college or university’s ability to respond, maintain continuity and bounce back following an adverse event, he says. Another factor complicating resiliency for many institutions is overly complex network environments, particularly in the cloud. As colleges and universities clamor to embrace digital transformation and cloud networking, it’s not uncommon for their environments to grow to a degree that becomes unmanageable. But uncontrolled and unregulated cloud sprawl can have a serious impact on an institution’s resilience. Developing easy-to-follow approaches and processes — along with adopting simplified, automated and easy-to-use technology solutions — can make a significant difference, Goerlich says. 


How to make asynchronous collaboration work for your business

Asynchronous working can bring some benefits that synchronous work can't – most notably speed. “Real-time communication means everyone must be in the same place, or at least the same time zone, in order for work to happen. If workers need to wait for syncs to decide or act on something, it slows down the company as a whole and reduces its ability to compete,” says van der Voort. Asynchronous collaboration allows people to work at their own pace, and does not force them to wait for input from others. Morning people, evening people and midnight-oil people collaborating across geographies can in some cases deliver higher-quality results than forcing everyone to come together for a 10am video call. To get this working well, policies such as having core working hours for each staff member, and having very clear goals and anticipated outcomes for all meetings, can be incredibly useful. “One of the most significant and highly sought-after benefits asynchronous collaboration offers is a dramatic reduction in meetings,” argues Lawyer. “It allows team members to contribute in the least amount of minutes, freeing up time for other work.”


Securing the Evolution of Smart Home Applications

Very few in the cybersecurity community have forgotten one of the most noteworthy incidents, the Mirai Botnet, which struck back in 2016. Attackers behind the botnet targeted the site of well-known cybersecurity journalist Brian Krebs. The Distributed Denial of Service (DDoS) attack lasted for days, 77 hours to be exact. It involved 24,000 Mirai-infected Internet-of-Things devices, including personal surveillance cameras. Jumping ahead to 2023, in June the Federal Trade Commission (FTC) settled a case with Ring’s owner, Amazon. The online retailing giant agreed to pay the FTC nearly $31 million in penalties to settle recently filed federal lawsuits over privacy violations. The FTC alleged that Ring compromised customer privacy by allowing any employee or contractor to access consumers’ private videos. The FTC also claimed hackers used Ring cameras’ two-way functionality to harass and even physically threaten consumers – including children – if they did not pay a ransom. These types of incidents clearly illustrate how critical it is to secure devices like cameras in a smart home.



Quote for the day:

"Before you are a leader, success is all about growing yourself when you become a leader, success is all about growing others." -- Jack Welch

Daily Tech Digest - October 02, 2023

Want people to embrace transformation? Allow them to own the change

The principles for a co-optable resource are straightforward: for starters, it must be accessible. Accessible means it must be opt-in, no mandates, no obvious carrots or sticks, and it is owned by those opting in. The barriers to entry must be low, and the benefits of using the resource have to be easy to communicate to others. Finally, it must be both impactful—that is, delivering practical value to its users—and scalable. In each of the following examples, a co-optable resource led to widespread uptake of a new idea or technology. The first one shows how a small organization was able to replicate itself globally by sharing the heavy lifting of making an idea scalable—an important lesson for managers who are daunted by introducing new ways of working because they feel the burden is all on them. The other two examples show how it’s possible to get enthusiasts within organizations to scale the use of technology, transform a business model, and change ways of working.


Weed Out Bad Data to Make Better Business Decisions

Using bad data for analytics, AI, and other apps can have catastrophic consequences for any organization. The worst-case scenario is making poor business decisions with that data – whether it’s investments, product changes, or hiring moves. Ignoring and not removing bad data results in misleading insights and misguided choices. It’s like blindly following a GPS without verifying its accuracy or knowing its end goal. You could potentially drive yourself into the ocean. It also has a broader chilling effect on a company. When bad data leads to skewed or inaccurate insights, employees lose trust in the data and systems more broadly. As a result, they stop relying on the data to make decisions altogether and instead devolve to making decisions based on gut feeling. At a bare minimum, bad data should be weeded out as often as you use it to make decisions. Ideally, though, it should happen upon the ingestion of the data. Constantly removing bad data as soon as it enters the system is the only way to reliably avoid polluting the clean data source.
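As a rough sketch of what "weeding out bad data upon ingestion" can look like in practice, the example below validates each incoming record and diverts failures to a quarantine list before they reach the clean store. The record fields and rules are hypothetical; real pipelines would also log why each record failed.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch: validate records at ingestion time and divert bad rows
    // to a quarantine list instead of letting them pollute the clean data store.
    public class IngestionValidator {
        record CustomerRecord(String id, String email, int age) {}

        static boolean isValid(CustomerRecord r) {
            return r.id() != null && !r.id().isBlank()
                && r.email() != null && r.email().contains("@")
                && r.age() >= 0 && r.age() <= 120;
        }

        public static void main(String[] args) {
            List<CustomerRecord> incoming = List.of(
                new CustomerRecord("c-1", "ada@example.com", 37),
                new CustomerRecord("", "not-an-email", -5));

            List<CustomerRecord> clean = new ArrayList<>();
            List<CustomerRecord> quarantine = new ArrayList<>();
            for (CustomerRecord r : incoming) {
                (isValid(r) ? clean : quarantine).add(r);
            }
            System.out.println("clean: " + clean + ", quarantined: " + quarantine);
        }
    }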


California’s Delete Act: What CIOs, CDOs, and Businesses Need to Know

The bill says consumers can delete data by using a website that will be hosted by the California Privacy Protection Agency, which has a 2026 deadline to create the website. In 2026, data brokers registered with the state must process delete requests once a month and undergo third-party audits every three years starting in 2028. Brokers who don’t comply will face daily fines. California’s law is not the first state law to target data brokers. Vermont, Texas, and Oregon all have laws creating broker registries. Vermont’s law has been in effect since 2019. California’s Data Broker law defines a data broker as “a business that knowingly collects and sells to third parties the personal information of a consumer with whom the business does not have a direct relationship.” While there is a federal data privacy bill, the American Data Privacy and Protection Act (ADPPA), the proposal is currently in US Congress limbo and its chances for passage are unclear. ADPPA would instruct the Federal Trade Commission (FTC) to create a national registry of data brokers and create a “do not collect” mechanism for individuals to opt out of personal data collection.


Global events fuel DDoS attack campaigns

NETSCOUT’s insights into the threat landscape come from its ATLAS sensor network built over decades of working with hundreds of Internet Service Providers globally, gleaning trends from an average of 424 Tbps of internet peering traffic, an increase of 5.7% over 2022. The company has observed nearly 500% growth in HTTP/S application layer attacks since 2019 and 17% growth in DNS reflection/amplification volumes during the first half of 2023. “While world events and 5G network expansion have driven an increase in DDoS attacks, adversaries continue to evolve their approach to be more dynamic by taking advantage of bespoke infrastructure such as bulletproof hosts or proxy networks to launch attacks,” stated Richard Hummel, senior threat intelligence lead, NETSCOUT. “The lifecycle of DDoS attack vectors reveals the persistence of adversaries to find and weaponize new methods of attack, while DNS water torture and carpet-bombing attacks have become more prevalent.”


Multibillion-dollar cybersecurity training market fails to fix the supply-demand imbalance

The good news is that retention increased, with a 6% drop in the number of respondents reporting retention issues compared to the previous year. But this improvement is more likely tied to economic uncertainty rather than work conditions having improved. The main reasons for employees departing included recruitment by other companies (58%). The second highest response, poor financial incentives (e.g., salaries or bonuses), is likely the main driver, ISACA found. Those seeking better financial compensation increased by 6% from last year to 54%. While work stress levels dropped by two percentage points from 2022, it remains a contributing factor at 43%, ranking fourth on the list. Other notable reasons included limited remote work possibilities (increased by four percentage points from 2022) and poor work culture/environment, both potentially driven by return-to-work mandates. "Uncertainty of any kind appears to be driving fewer job changes, and while vacancies persist, the survey results indicate that enterprises appear to be tightening budgets and compensation aids ahead of a potential recession," read the report.


Prompt Engineering in Software Automation

While these problems can’t be ignored, there is still a lot of justifiable excitement about how these programs can help democratize software development by supporting technical and non-technical teams alike. Perhaps the most impressive thing to consider is that tools like ChatGPT can produce functional code very quickly. With the right prompt, engineers can reduce the time it takes to program certain types of code, ensuring a swifter software development life cycle. At the end of 2022, the popular programming hub Stack Overflow banned AI-generated answers on its forum. They cited the high error rate and inaccuracies associated with the application. However, the technology is in a nascent stage; furthermore, the dissatisfaction with AI-generated output owes as much to poor prompt engineering as it does to the technology itself. Despite the misgivings over the tech, a recent piece by McKinsey highlights the impact that prompt engineering is already having in the world of programming. The consulting firm’s 


Hackers Impersonate Meta Recruiter to Target Aerospace Firm

The attack is part of an ongoing campaign tracked as "Operation DreamJob," in which fake recruiters reach out through LinkedIn. Attackers convince victims to self-compromise their systems by employing different strategies, such as luring the target to execute a malicious PDF viewer to see the full contents of a job offer, or encouraging the victim to connect with a Trojanized SSL/VPN client. "The most worrying aspect of the attack is the new type of payload, LightlessCan, a complex and possibly evolving tool that exhibits a high level of sophistication in its design and operation, representing a significant advancement in malicious capabilities compared to its predecessor, BlindingCan," researchers said. ESET says it observed victims receiving two malicious executables, Quiz1.exe and Quiz2.exe, which were delivered via .iso images hosted on a third-party cloud storage platform. "The first challenge is a very basic project that displays the text 'Hello, World!'" researchers said. "The second prints a Fibonacci sequence up to the largest element smaller than the number entered as input."


Technology is Crack and We are the Dealers

What is actually going on though is not really sinister, it is just stupid. For years most technology did not really impact lives outside of military, reactors, planes, infrastructure and the like… then medicine, electrical grids, and finances joined the group. And so forth. But most technology was just corporate enablement. No one was going to die if the order management system went down for an hour. Maybe get fired but not die. Thus we chose to use standards and review (governance) as our primary mechanism for quality decisions. And even these were flaky at best and pretty easy to get around (please like I can’t game a governance review board? hahahaha). The people reviewing had their checklists and the delivery folks knew how to make them happy enough. Or just go to the executive sponsor who goes to the executives and gets a ‘pass’. Oh well, it’s just a bit of technical debt! The future is coming to humanity. That much is certain. But at what rate? What is acceptable loss? How will society get a handle on run-away technology? And which organizations will survive? 


The dark arts of digital transformation — and how to master them

“If you’re in a leadership role in Engineering, you aren’t going to succeed unless you have a strong ally in Product,” says Etkin. “Developers sometimes have this idea that management isn’t necessary, or they have disdain for the nontechnical side of things. That’s a terrible idea that will get you absolutely nowhere.” Etkin, an early employee at Atlassian who was the original architect of Jira, admits that he wasn’t always good at building alliances with his peers. He had to figure out how to get on the same page with people who often had very different ideas about how to proceed. That meant asking a lot of questions and listening to the answers. ... A key thing to remember is that the dark forces you’re attempting to subdue may not be the individuals opposing you, but the systems in which they themselves are trapped. Organizations that have found success operating in a certain way may see little reason to shake things up. Even when the changes are necessary, such as in the case of increased competition from disruptive new entrants or the emergence of transformative technologies, the effort required to overcome internal inertia could exhaust all your magic powers.


Regulations Push Firms to Boost AI, ML Spend

Unlike some industries, though, financial services are highly regulated, given the industry’s stature as the modern economy’s backbone. “The industry as a whole must be cautious about adopting new technologies given the myriad of rules and regulations at play,” cautions Joe Robinson, CEO, Hummingbird. “Financial institutions can plan to leverage the opportunities that AI presents but must do so carefully.” He says by using explainable algorithms, auditable decision-making processes, and/or human-in-the-loop reviews, they can take advantage of the potential of AI while ensuring that regulatory obligations are met. “As with many new technologies, it's best to start small, observe outcomes, and scale up thoughtfully and pragmatically,” he says. Cullen adds it’s critical to ensure the needed talent infrastructure is in place. “Determine where you should hire and where you may need to augment, especially in relation to the evolving regulatory landscape,” she says.



Quote for the day:

”Taking a step back can often be the quickest way forward.” — Tim Fargo

Daily Tech Digest - October 01, 2023

The future of work is human-AI synergy

AI and humans can work in sync by capitalising on their respective strengths. AI's ability to automate routine tasks liberates human workers to focus on more complex and nuanced responsibilities, where their human touch is indispensable. This dynamic significantly amplifies productivity and allows employees to dedicate their time to strategic thinking and fostering innovation. AI's application in Big Data Analytics equips human workers with invaluable insights, enabling them to make quicker, more informed decisions with heightened precision. For instance, financial institutions employ AI analytics to rapidly evaluate loan applications, while healthcare professionals use AI algorithms to swiftly diagnose serious illnesses from patient data. However, it's crucial to emphasise that AI serves as a valuable tool rather than a replacement for human workers. The efficiency and productivity gains result from the synergy between human intelligence and AI capabilities.


10 Strategies for Simplified Data Management

Centralization means creating a unified, accessible, and authoritative store for all of your organizational data. Users and processes can then leverage and manage otherwise distinct data in a convenient, coherent fashion. The two main approaches here are data lakes and data warehouses. A data lake is a large repository of different kinds of data - all stored in their original format. This provides a valuable resource, as we can apply any kind of transformations and aggregation we need for analysis. A data warehouse differs from a data lake in the sense that it is stored in a format and structure that’s defined for a specific purpose. This is useful if we need to carry out similar analytical operations on a large scale. ... As we said earlier, an enterprise data model is a detailed account of all of the data assets that are involved in core business processes - along with where each of these is sourced from, what they’re used for, and how they relate to each other. This is effectively a data-centric representation of how your business works. In turn, an effective data model brings along several important benefits. 
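As a small, hedged illustration of the lake/warehouse distinction, the sketch below keeps a raw event untouched in a date-partitioned "lake" path while loading only the typed fields an analysis needs into a "warehouse" table over JDBC. It assumes an in-memory H2 database on the classpath; the paths, URL, and schema are made up for the example.

    import java.math.BigDecimal;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;
    import java.time.LocalDate;

    // Illustrative only: the same event is kept raw in the "lake" and loaded in a
    // purpose-built, typed shape into the "warehouse". Paths and schema are made up.
    public class LakeVsWarehouse {
        public static void main(String[] args) throws Exception {
            String rawJson = "{\"orderId\":\"o-42\",\"amount\":\"19.99\",\"note\":\"gift\"}";

            // Data lake: store the event exactly as received, partitioned by date.
            Path lakeFile = Path.of("lake", "orders", LocalDate.now().toString(), "o-42.json");
            Files.createDirectories(lakeFile.getParent());
            Files.writeString(lakeFile, rawJson);

            // Data warehouse: only the fields the analysis needs, strongly typed.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:warehouse")) {
                try (Statement ddl = conn.createStatement()) {
                    ddl.execute("CREATE TABLE orders (order_id VARCHAR(32), amount DECIMAL(10,2))");
                }
                try (PreparedStatement insert = conn.prepareStatement(
                        "INSERT INTO orders (order_id, amount) VALUES (?, ?)")) {
                    insert.setString(1, "o-42");
                    insert.setBigDecimal(2, new BigDecimal("19.99"));
                    insert.executeUpdate();
                }
            }
        }
    }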


Data Quality Assessment: Measuring Success

A Data Quality assessment will move along more efficiently and provide better results if a list of concerns and goals is created before the assessment. When creating this list, be aware of the organization’s long-term goals, while listing short-term goals. For example, the long-term goal of making the business more efficient can be broken down into smaller goals, such as fixing the system so the right people get the right bills, and ensuring that all the clients’ addresses are correct, etc. This list can also be presented to a board of directors as a rationale for initiating and paying for Data Quality assessment software or hiring a contractor to perform the assessment. The basic steps for creating the list are presented below. Start by making a list of Data Quality problems that have occurred over the last year. Spend a week or two observing the flow of data and determine what looks questionable, and why. Share your observations with other managers and staff, get feedback, and adjust the results using the feedback.


Test Architecture: Creating an Architecture for Automated Tests

The test architecture is important, especially when you are dealing with a complex project or expecting the project to grow in the near future. The test architecture helps to reduce the risks and eliminate the assumptions before delivery. As you are aware, anything you do randomly may not lead to a better outcome. The test architecture streamlines the entire process of testing. Unlike other testing activities, it is not focused on a single testing activity; rather, it covers the entire testing effort, with the testing team aiming to deliver a high-quality product. ... The test architect works with multiple teams such as development, DevOps, testing, and business/product teams, so the test architect is responsible for communication with stakeholders. If there are any challenges from the development team, the test architect should be able to work with them to get those resolved. The complexity of the test architecture for automation depends on the tool you choose, because some tools require you to build the framework while others come with a framework ready. Not all tools require coding, so the activities involved in defining the coding standards and setup will be reduced.


7 Cybersecurity Questions That Can Transform Your Business

Anyone who has spent any time thinking about cybersecurity knows how multifaceted and complicated our digital supply chains are today. That means we need to empower people who are working directly with the different touch points in the supply chain and elevate their cybersecurity thinking. They need all the information and resources available to ensure they only push secure software to customers. ... You may have a long list of audits and other compliance procedures in progress currently. This is where I ask you to remember that the point of a canvas like this is that it is one page! While that may not give you all the room to include every initiative, that may be a good thing. Instead of starting new small-scale initiatives, consider, for instance, adopting or enhancing a DevSecOps approach that could transform your security efforts. ... When we talk about costs here, we mean actual costs. This includes external consultant fees, CISO office salaries, MDR subscriptions, security training and platform subscriptions. When confronted with these numbers, we can make decisions that aren’t only guided by whims or immediate needs.


Why Cloud Native Expertise Is so Hard to Hire for, and What to Do Instead

Fortunately, there are alternatives for organizations looking to develop their cloud native expertise. One of the most popular options is to work with a third-party provider that specializes in providing cloud native services — so you don’t have to. This is a core component of what “ZeroOps” entails: the notion of freeing your own employees to take their time back, and letting someone else do the time-consuming, bothersome stuff. Working with a third-party provider can provide organizations with high levels of expertise and resources while allowing your team to focus on their core business — innovating, creating, and making a measurable impact. This can result in significant cost savings and increased efficiency, as the provider takes on the responsibility of managing complex cloud native solutions. Many providers can offer comprehensive services, ranging from architecture to software engineering and deployment, and can tailor their services to an organization’s unique requirements — of which we know there are many.


The CISO Carousel and Its Effect on Enterprise Cybersecurity

“There is still a prevalent perception that CISOs are viewed as scapegoats in serious breach events,” adds George Jones, CISO at Critical Start. “This is based on a general lack of understanding, high expectations, and accountability associated with the role. When a breach occurs, it’s easy to point the finger at the person responsible for cybersecurity.” It’s the effect, says Yu, of “accountability without authority”. Making the CISO a scapegoat is a common but not blanket response to cybersecurity incidents. Agnidipta Sarakar, VP and CISO advisory at ColorTokens, points out, “Organizations who are mature tend not to blame the CISO unless the security program is actually not good enough.” But less mature organizations with weaker programs or negligent security oversight will readily activate the scapegoat effect. ... Globally, there are many companies where cybersecurity is both prioritized and supported, but these tend to be among the larger and more mature organizations. There remains a large underswell of newer and smaller companies where growth is often prioritized over security.


Closing the skills gap in the AI era: A global imperative

To tackle this reskilling challenge on a large scale, we require a combined effort from the government, education, and private sector. This can be achieved in the following ways: Make learning achievable: Instead of diving into the deeply technical aspects of AI, companies can begin by introducing the workforce to tools that require no-code or low-code experience. Further, citizen development programs can be implemented. These programs encourage employees to be innovative problem solvers and foster a sense of ownership as they witness the direct impact of their work on business outcomes using no-code/low-code tools. These programs allow them to savour initial automation successes almost immediately and to envision greater possibilities for bots to help them in the future. Take advantage of existing partnerships: Companies should leverage the knowledge of their existing technology partners to quickly roll out skilling programs. The National Health Service in the UK, for example, was able to offer its 1.7 million employees automation training with the help of its technology partner.


Could APIs undermine Zero Trust?

APIs come in various shapes and flavours. As well as being internal or public facing, they might interface in numerous ways, from a single API providing access to a service mechanism, to aggregated APIs that then use another as the point of entry, to APIs that act as the go-between for various non-compatible applications, or partner/third-party APIs. They are also problematic to monitor and secure using traditional mechanisms. Segmentation and deep inspection technology at the network level can miss APIs completely, resulting in those shadow APIs, while application-layer protection methods such as web application firewalls (WAFs), which use signature-based threat detection, will miss the kind of abuse that typically leads to API compromise. Often, APIs are not ‘hacked’ as such; rather, their functionality is used against them in business logic abuse attacks, and so it’s the behaviour of the API request and resulting traffic that needs to be observed. Yet it’s clear that APIs must be included in ZTA.
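Sketching what "observing the behaviour of the API request" might mean in the simplest possible terms, the toy example below counts calls per client per endpoint within a window and flags outliers. A real detector would baseline behaviour statistically and inspect sequences and payloads, not just volume; the threshold and names here are invented.

    import java.util.HashMap;
    import java.util.Map;

    // Toy sketch of behavioural API monitoring: count requests per client per
    // endpoint in a time window and flag clients that far exceed a set limit.
    // A real system would baseline each client and endpoint statistically.
    public class ApiBehaviourMonitor {
        private static final int WINDOW_LIMIT = 100; // illustrative threshold

        private final Map<String, Integer> requestsInWindow = new HashMap<>();

        boolean record(String clientId, String endpoint) {
            String key = clientId + " " + endpoint;
            int count = requestsInWindow.merge(key, 1, Integer::sum);
            if (count > WINDOW_LIMIT) {
                System.out.println("Possible abuse: " + key + " made " + count + " calls this window");
                return false; // caller could throttle or step up authentication
            }
            return true;
        }

        void resetWindow() {
            requestsInWindow.clear(); // called by a scheduler at each window boundary
        }
    }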


Transforming Decision-Making Processes

GenAI has quickly become a part of everyday conversations from the boardroom to the kitchen table. One specific topic of interest is the role genAI can play in enhancing and improving an organization’s decision-making paradigm. Organizations should look for AI engines that combine the power of artificial intelligence, machine learning, and generative AI to further advance the democratization of analytics. This can reduce the time required to derive insights from data. With AI and cloud-native analytics automation, the power and scale of better decision-making is at everyone’s fingertips. While it is still the early days for genAI, we see this newer capability accelerating the path for organizations to become more insights driven in their decision-making. Natural language processing translates insights into business language that can be shared broadly and leveraged by all. GenAI and large language models (LLMs) eliminate tedious tasks, leverage best practices from millions of workflows in production, automatically document workflows, and free up time for humans to focus on more strategic challenges.



Quote for the day:

"People often say that motivation doesn't last. Well, neither does bathing - that's why we recommend it daily." -- Zig Ziglar

Daily Tech Digest - September 29, 2023

Why root causes matter in cybersecurity

In the cybersecurity industry, unfortunately, there is no official directory of root causes. Many vendors categorize certain attacks as root causes when in reality, these are often outcomes or symptoms. For example, ransomware, remote access, stolen credentials, etc., are all symptoms, not root causes. The root cause behind remote access or stolen credentials is most likely human error or some vulnerability. ... The true root cause is human error. People are prone to mistakes, ignorance, and biases. We open malicious attachments, click on wrong links, surf the wrong websites, use weak credentials, and reuse passwords across multiple sites. We use unauthorized software and post our private details publicly on social media for bad actors to scrape and harvest. We take security far too much for granted. Human error in cybersecurity is a much larger problem than previously anticipated or documented. To clamp down on human error, organizations must train employees enough so they can develop a security instinct and improve their security habits. Clear policies and procedures must be in place, so everyone understands their responsibility and accountability towards the business.


Running Automation Tests at Scale Using Java

As customer decision making is now highly dependent on digital experience as well, organisations are increasingly investing in the quality of that digital experience. That means establishing high internal QA standards and, most importantly, investing in Automation Testing for faster release cycles. So, how does this concern you as a developer or tester? Having automation skills on your resume is highly desirable in the current employment market. Additionally, getting started is quick. Selenium is the ideal framework for beginning automation testing. It is the most popular automated testing framework and supports all major programming languages. This post will discuss Selenium, how to set it up, and how to use Java to create an automated test script. Next, we will see how to use a Java-based testing framework like TestNG with Selenium and perform parallel test execution at scale on the cloud using LambdaTest.
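A minimal sketch of the kind of Selenium-plus-TestNG class the post describes is below. It assumes the Selenium and TestNG dependencies are on the classpath and a local Chrome is available; the URL and assertion are placeholders. Parallel execution is then switched on in testng.xml (for example, parallel="methods" with a thread-count), and moving to a cloud grid is a matter of swapping ChromeDriver for RemoteWebDriver pointed at the grid URL.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.testng.Assert;
    import org.testng.annotations.AfterMethod;
    import org.testng.annotations.BeforeMethod;
    import org.testng.annotations.Test;

    // Minimal Selenium + TestNG sketch: each test method gets its own driver,
    // so the class stays safe to run with TestNG's parallel execution.
    public class SearchPageTest {
        private WebDriver driver;

        @BeforeMethod
        public void setUp() {
            driver = new ChromeDriver(); // local Chrome; use RemoteWebDriver for a cloud grid
        }

        @Test
        public void homePageTitleIsPresent() {
            driver.get("https://example.com");
            Assert.assertFalse(driver.getTitle().isEmpty(), "Page should have a title");
        }

        @AfterMethod
        public void tearDown() {
            driver.quit();
        }
    }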


How Generative AI Can Support DevOps and SRE Workflows

Querying a bunch of different tools for logs and a bunch of different observability data and outputs manually requires a lot of time and knowledge, which isn’t necessarily efficient. Where is that metric? Which dashboard is it in? What’s the machine name? How do other people typically refer to it? What kind of time window do people typically look at here? And so forth. “All that context has been done before by other people,” Nag said. And generative AI can enable engineers to use natural language prompts to find exactly what they need — and often kick off the next steps in subsequent actions or workflows automatically as well, often without ever leaving Slack ... The cloud native ecosystem is vast (and continually growing) — keeping up with the intricacies of everything is almost impossible. With generative AI, Nag said, no one actually needs to know the ins and outs of dozens of different systems and tools. A user can simply say “scale up this pod by two replicas or configure this Lambda [function] this way.”


Diverse threat intelligence key to cyberdefense against nation-state attacks

Most threat intelligence houses currently originate from the West or are Western-oriented, and this can result in bias or skewed representations of the threat landscape, noted Minhan Lim, head of research and development at Ensign Labs. The Singapore-based cybersecurity vendor was formed through a joint venture between local telco StarHub and state-owned investment firm, Temasek Holdings. "We need to maintain neutrality, so we're careful about where we draw our data feeds," Lim said in an interview with ZDNET. "We have data feeds from all reputable [threat intel] data sources, which is important so we can understand what's happening on a global level." Ensign also runs its own telemetry and SOCs (security operations centers), including in Malaysia and Hong Kong, collecting data from sensors deployed worldwide. Lim added that the vendor's clientele comprises multinational corporations (MNCs), including regional and China-based companies, that have offices in the U.S., Europe, and South Africa.


Where Does Zero Trust Fall Short? Experts Weigh In

The strategy of ZT can be applied to all of those areas and, if done correctly and intelligently, then a solid strategic approach can be beneficial. There is no ZT product that can simply make those areas secure, however. I would also suggest that the largest area of threat is privileged access, as that is the most common avenue of lateral movement and increased compromise historically.” ... “It’s a multifaceted issue when determining the greatest threat among the areas where zero trust falls short. At the core, privileged access stands out as the most alarming vulnerability. These users, often likened to having ‘keys to the kingdom,’ possess the capabilities to access confidential data, modify configurations and undertake actions that could severely jeopardize an organization. “However, an underlying concern that might be overlooked is the reason behind the extensive distribution of privileged access. In many situations, this excessive access stems from challenges tied to legacy systems, IoT devices, third-party services, and emerging technologies and applications. 


Data Management Challenges In Heterogeneous Systems

When you look at the whole chiplet ecosystem, there are certain blocks we feel can be generalized and made into chiplets, or known good die, that can be brought into the market. The secret sauce is the custom piece of silicon, and they can design and own the recipe around that. But there are generic components in any SoC — memory, interconnects, processors. You can always fragment it in a way that there are some general components, which you can leverage from the general market, and which will help everyone. That brings the cost of building your system down so you can focus on problems around your secret sauce. ... We need something like a three-tier data management system, where with tier one everyone can access data and share it, and tier three is only for people in a company. But I don’t know when we’ll get there because data management is a real tough problem. ... We may need new approaches. Just looking at this from the hyperscale cloud perspective, which is huge, with complex hardware/software systems and things coming in from many vendors, how do we protect it?


Companies are already feeling the pressure from upcoming US SEC cyber rules

Calculating the financial ramifications of a cybersecurity incident under the upcoming rules placed pressure on corporate leaders to collaborate more closely with CISOs and other cybersecurity professionals within their organizations. Right now, a "gulf exists between boards and CFOs and their cybersecurity defense teams, their chief information security officers," Gerber says. "The two aren’t speaking the same language yet." Gerber thinks that "what companies and CFOs are realizing is that they need to get their teams into these exercises so that they can practice making their determinations as accurately and clearly as they can and as early as they can." "I think that the general counsels and the CISOs have been at arm’s length of each other, and I’m going to tell you one extreme," Sanna says. "One CISO told us that their legal or general counsel did not want them to assess cyber risk in financial terms so they could claim ignorance and not have to report it."


A Guide to Data-Driven Design and Architecture

Data-driven architecture involves designing and organizing systems, applications, and infrastructure with a central focus on data as a core element. Within this architectural framework, decisions concerning system design, scalability, processes, and interactions are guided by insights and requirements derived from data. Fundamental principles of data-driven architecture include: Data-centric design – Data is at the core of design decisions, influencing how components interact, how data is processed, and how insights are extracted. Real-time processing – Data-driven architectures often involve real-time or near real-time data processing to enable quick insights and actions. Integration of AI and ML – The architecture may incorporate AI and ML components to extract deeper insights from data. Event-driven approach – Event-driven architecture, where components communicate through events, is often used to manage data flows and interactions.
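As a toy illustration of the event-driven principle in that list, the sketch below has a producer and a consumer that never call each other directly; they only exchange events over a shared channel. An in-memory BlockingQueue stands in for whatever event bus or broker a real data-driven architecture would use, and the event fields are invented.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Toy sketch of the event-driven principle: components communicate only through
    // events on a shared channel, never by calling each other directly.
    public class EventDrivenSketch {
        record OrderPlaced(String orderId, double amount) {}

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<OrderPlaced> events = new LinkedBlockingQueue<>();

            Thread consumer = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        OrderPlaced event = events.take();   // analytics component reacts to events
                        System.out.println("analytics saw " + event);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();      // shut down cleanly
                }
            });
            consumer.start();

            events.put(new OrderPlaced("o-1", 42.00));       // producer side publishes events
            events.put(new OrderPlaced("o-2", 19.99));
            Thread.sleep(200);                               // give the consumer time to drain
            consumer.interrupt();
        }
    }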


The Search for Certainty When Spotting Cyberattacks

Exacerbating the problem is the availability of malware and ransomware services for sale on the Dark Web, Taylor said, which can arm bad actors with the means of doing digital harm even if they lack coding skills of their own. That makes it harder to profile and identify specific attackers, he said, because thousands of bad actors might buy the same tools to attack systems. “We can’t identify where it’s coming from very easily,” Taylor said, because almost anybody could be a hacker. “You don’t have to be the expert anymore. You don’t have to be the cyber gang that’s very technically adept at developing all these tools.” That means cyberattacks may be launched from unexpected angles. For example, he said, gangs could outsource their hacking needs via such resources, or individuals who are simply bored at home might pick up such tools from the Dark Web to create phishing campaigns. “It becomes harder and harder to profile the threat.”


How Listening to the Customer Can Boost Innovation

Product development should not rely solely on customer input. Development teams should also take product metrics into account. Most, if not all, SaaS products today track a wealth of product metrics that show how customers use and engage with products. These insights can drive product development and strategy. For example, by providing insights on how individual customers are interacting with products, development teams can see what features customers are and aren’t using, or perhaps struggling with. This can validate whether customer requests to improve certain features are correct. Metrics can also show whether new products or services are performing well and having a positive impact on business outcomes. From a business perspective, you want new services to improve engagement, retention and sentiment, and metrics can show the benefits of listening to the customer by demonstrating how new services are helping to improve revenue growth.



Quote for the day:

"Become the kind of leader that people would follow voluntarily, even if you had no title or position." --Brian Tracy

Daily Tech Digest - September 28, 2023

What is artificial general intelligence really about?

AGI is a hypothetical intelligent agent that can accomplish the same intellectual achievements humans can. It could reason, strategize, plan, use judgment and common sense, and respond to and detect hazards or dangers. This type of artificial intelligence is much more capable than the AI that powers the cameras in our smartphones, drives autonomous vehicles, or completes the complex tasks we see performed by ChatGPT. ... AGI could change our world, advance our society, and solve many of the complex problems humanity faces whose solutions are far beyond humans' reach. It could even identify problems humans don't even know exist. "If implemented with a view to our greatest challenges, [AGI] can bring pivotal advances in healthcare, improvements to how we address climate change, and developments in education," says Chris Lloyd-Jones, head of open innovation at Avanade. ... AGI carries considerable risks, and experts have warned that advancements in AI could cause significant disruptions to humankind. But expert opinions vary on quantifying the risks AGI could pose to society.


How to avoid the 4 main pitfalls of cloud identity management

DevOps and Security teams are often at odds with each other. DevOps wants to ship applications and software as fast and efficiently as possible, while Security’s goal is to slow the process down and make sure bad actors don’t get in. At the end of the day, both sides are right – fast development is useless if it creates misconfigurations or vulnerabilities and security is ineffective if it’s shoved toward the end of the process. Historically, deploying and managing IT infrastructure was a manual process. This setup could take hours or days to configure, and required coordination across multiple teams. (And time is money!) Infrastructure as code (IaC) changes all of that and enables developers to simply write code to deploy the necessary infrastructure. This is music to DevOps ears, but creates additional challenges for security teams. IaC puts infrastructure in the hands of developers, which is great for speed but introduces some potential risks. To remedy this, organizations need to be able to find and fix misconfigurations in IaC to automate testing and policy management.
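As a deliberately tiny illustration of "finding misconfigurations in IaC" as an automated pipeline step (real teams would run a dedicated policy scanner against actual Terraform or CloudFormation), the sketch below checks a parsed resource definition for a couple of invented risky settings and fails the build when it finds one.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Toy policy check, not a real IaC scanner: flag storage resources that are
    // publicly readable or unencrypted before the pipeline is allowed to deploy.
    public class IacPolicyCheck {
        static List<String> violations(Map<String, Object> resource) {
            List<String> problems = new ArrayList<>();
            if (Boolean.TRUE.equals(resource.get("public_read"))) {
                problems.add("bucket is publicly readable");
            }
            if (!Boolean.TRUE.equals(resource.get("encrypted"))) {
                problems.add("encryption at rest is disabled");
            }
            return problems;
        }

        public static void main(String[] args) {
            Map<String, Object> bucket = Map.of(
                "type", "storage_bucket", "public_read", true, "encrypted", false);
            List<String> problems = violations(bucket);
            if (!problems.isEmpty()) {
                System.out.println("Blocking deploy: " + problems);
                System.exit(1); // fail the pipeline step so the misconfiguration is fixed early
            }
        }
    }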


Why a DevOps approach is crucial to securing containers and Kubernetes

DevOps, which is heavily focused on automation, has significantly accelerated development and delivery processes, making the production cycle lightning fast, leaving traditional security methods lagging behind, Carpenter says. “From a security perspective, the only way we get ahead of that is if we become part of that process,” he says. “Instead of checking everything at the point it’s deployed or after deployment, applying our policies, looking for problems, we embed that into the delivery pipeline and start checking security policy in an automated fashion at the time somebody writes source code, or the time they build a container image or ship that container image, in the same way developers today are very used to, in their pipelines.” It’s “shift left security,” or taking security policies and automating them in the pipeline to unearth problems before they get to production. It has the advantage of speeding up security testing and enables security teams to keep up with the efficient DevOps teams. “The more things we can fix early, the less we have to worry about in production and the more we can find new, emerging issues, more important issues, and we can deal with higher order problems inside the security team,” he says.


Understanding Europe's Cyber Resilience Act and What It Means for You

The act is broader than a typical IoT security standard because it also applies to software that is not embedded. That is to say, it applies to the software you might use on your desktop to interact with your IoT device, rather than just applying to the software on the device itself. Since non-embedded software is where many vulnerabilities take place, this is an important change. A second important change is the requirement for five years of security updates and vulnerability reporting. Few consumers who buy an IoT device expect regular software updates and security patches for that type of time range, but both will be a requirement under the CRA. The third important point of the standard is the requirement for some sort of reporting and alerting system for vulnerabilities so that consumers can report vulnerabilities, see the status of security and software updates for devices, and be warned of any risks. The CRA also requires that manufacturers notify the European Union Agency for Cybersecurity (ENISA) of a vulnerability within 24 hours of discovery. 


Conveying The AI Revolution To The Board: The Role Of The CIO In The Era Of Generative AI

Narratives can be powerful, especially when they’re rooted in reality. By curating a list of businesses that have thrived with or invested in AI—especially those within your sector—and bringing forth their successful integration case studies, you can demonstrate not just possibilities but proven success. It conveys a simple message: If they can, so can we. ... Change, especially one as foundational as AI, can be daunting. Set up a task force to outline the stages of AI implementation, starting with pilot projects. A clear, step-by-step road map demystifies the journey from our current state to an AI-integrated future. It offers a sense of direction by detailing resource allocations, potential milestones and timelines—transforming the AI proposition from a vague idea into a concrete plan. ... In our zeal to champion AI, we mustn’t overlook the ethical considerations it brings. Draft an AI ethics charter, highlighting principles and practices to ensure responsible AI adoption. Addressing issues like data privacy, bias mitigation and the need for transparent algorithms proactively showcases a balanced, responsible approach.


Chip industry strains to meet AI-fueled demands — will smaller LLMs help?

Avivah Litan, a distinguished vice president analyst at research firm Gartner, said sooner or later the scaling of GPU chips will fail to keep up with growth in AI model sizes. “So, continuing to make models bigger and bigger is not a viable option,” she said. iDEAL Semiconductor's Burns agreed, saying, "There will be a need to develop more efficient LLMs and AI solutions, but additional GPU production is an unavoidable part of this equation." "We must also focus on energy needs," he said. "There is a need to keep up in terms of both hardware and data center energy demand. Training an LLM can represent a significant carbon footprint. So we need to see improvements in GPU production, but also in the memory and power semiconductors that must be used to design the AI server that utilizes the GPU." Earlier this month, the world’s largest chipmaker, TSMC, admitted it's facing manufacturing constraints and limited availability of GPUs for AI and HPC applications. 


NoSQL Data Modeling Mistakes that Ruin Performance

Getting your data modeling wrong is one of the easiest ways to ruin your performance. And it’s especially easy to screw this up when you’re working with NoSQL, which (ironically) tends to be used for the most performance-sensitive workloads. NoSQL data modeling might initially appear quite simple: just model your data to suit your application’s access patterns. But in practice, that’s much easier said than done. Fixing data modeling is no fun, but it’s often a necessary evil. If your data modeling is fundamentally inefficient, your performance will suffer once you scale to some tipping point that varies based on your specific workload and deployment. Even if you adopt the fastest database on the most powerful infrastructure, you won’t be able to tap its full potential unless you get your data modeling right. ... How do you address large partitions via data modeling? Basically, it’s time to rethink your primary key. The primary key determines how your data will be distributed across the cluster, which improves performance as well as resource utilization.
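To make "rethink your primary key" slightly more concrete: one widely used remedy for unbounded partitions in wide-column stores such as Cassandra or ScyllaDB is to add a time bucket to the partition key, so a single hot entity cannot grow one partition forever. The schema in the comment and the bucketing helper below are illustrative assumptions, not a prescription for any particular workload.

    import java.time.Instant;
    import java.time.ZoneOffset;
    import java.time.format.DateTimeFormatter;

    // Sketch of a common fix for unbounded partitions in wide-column NoSQL stores:
    // bucket the partition key by day so one busy sensor/user/device cannot grow a
    // single partition without limit. Illustrative CQL-style schema:
    //   CREATE TABLE readings (sensor_id text, day text, ts timestamp, value double,
    //                          PRIMARY KEY ((sensor_id, day), ts));
    public class PartitionKeyBucketing {
        private static final DateTimeFormatter DAY =
            DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

        // The partition key the application writes with: (sensor_id, day)
        static String partitionKey(String sensorId, Instant timestamp) {
            return sensorId + "|" + DAY.format(timestamp);
        }

        public static void main(String[] args) {
            System.out.println(partitionKey("sensor-17", Instant.parse("2023-09-27T10:15:30Z")));
            // -> sensor-17|2023-09-27 : at most one day of readings per partition
        }
    }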


AI and customer care: balancing automation and agent performance

AI alone brings real challenges to delivering outstanding customer service and satisfaction. For starters, this technology must be perfect, or it can lead to misunderstandings and errors that frustrate customers. It also lacks the humanised context of empathy and understanding of every customer’s individual and unique needs. A concern we see repeatedly is whether AI will eventually replace human engagement in customer service. Despite the recent advancements in AI technology, I think we can agree it remains increasingly unlikely. Complex issues that arise daily with customers still require human assistance. While AI’s strength lies in dealing with low-touch tasks and making agents more effective and productive, at this point, more nuanced issues still demand the human touch. However, the expectation from AI shouldn’t be to replace humans. Instead, the focus should be on how AI can streamline access to live-agent support and enhance the end-to-end customer care process. 


How to Handle the 3 Most Time-Consuming Data Management Activities

In the context of data replication or migration, data integrity can be compromised, resulting in inconsistencies or discrepancies between the source and target systems. This is the second most common challenge faced by data producers, cited by 40% of organizations, according to The State of DataOps report. Replication processes generate redundant copies of data, while migration efforts may inadvertently leave extraneous data in the source system. Consequently, this situation can lead to uncertainty regarding which data version to rely upon and can result in wasteful consumption of storage resources. ... Another factor affecting data availability is the use of multiple cloud service providers and software vendors. Each offers proprietary tools and services for data storage and processing. Organizations that heavily invest in one platform may find it challenging to switch to an alternative due to compatibility issues. Transitioning away from an ecosystem can incur substantial costs and effort for data migration, application reconfiguration, and staff retraining.
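One common way to catch the source/target inconsistencies described above is a reconciliation step after each replication or migration run: compare row counts (and, more thoroughly, per-column checksums) before trusting the copy. The JDBC sketch below is a minimal, hedged version of that idea; the connections and table names are placeholders.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Illustrative reconciliation sketch: after replication, compare row counts
    // between source and target before declaring the copy trustworthy.
    public class ReplicationReconciler {
        static long rowCount(Connection conn, String table) throws Exception {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table)) {
                rs.next();
                return rs.getLong(1);
            }
        }

        static void reconcile(Connection source, Connection target, String table) throws Exception {
            long sourceRows = rowCount(source, table);
            long targetRows = rowCount(target, table);
            if (sourceRows != targetRows) {
                throw new IllegalStateException(
                    table + ": source has " + sourceRows + " rows, target has " + targetRows);
            }
            System.out.println(table + ": row counts match (" + sourceRows + ")");
        }
    }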


The Secret of Protecting Society Against AI: More AI?

One of the areas of greatest concern with generative AI tools is the ease with which deepfakes -- images or recordings that have been convincingly altered and manipulated to misrepresent someone -- can be generated. Whether it is highly personalized emails or texts, audio generated to match the style, pitch, cadence, and appearance of actual employees, or even video crafted to appear indistinguishable from the real thing, phishing is taking on a new face. To combat this, tools, technologies, and processes must evolve to create verifications and validations to ensure that the parties on both ends of a conversation are trusted and validated. One of the methods of creating content with AI is using generative adversarial networks (GAN). With this methodology, two processes -- one called the generator and the other called the discriminator -- work together to generate output that is almost indistinguishable from the real thing. During training and generation, the tools go back and forth between the generator creating output and the discriminator trying to guess whether it is real or synthetic. 



Quote for the day:

“You are the only one who can use your ability. It is an awesome responsibility.” -- Zig Ziglar

Daily Tech Digest - September 27, 2023

CISOs are struggling to get cybersecurity budgets: Report

"Across industries, the decline in budget growth was most prominent in tech firms, which dropped from 30% to 5% growth YoY," IANS said in a report on the study. "More than a third of organizations froze or cut their cybersecurity budgets." Budget growth was the lowest in sectors that are relatively mature in cybersecurity, such as retail, tech, finance, and healthcare, added the report. ... Of the CISOs whose companies did increase cybersecurity budgets, 80% indicated extreme circumstances, such as a security incident or a major industry disruption, drove the budget increase. While companies impacted by a cybersecurity breach added 18% to their budget on average, other industry disruptions contributed to a 27% budget boost. "I think there has always been a component of security spending that is forced to be reactive: be it incidents, updated regulatory or vendor controls or shifting business priorities," Steffen said. "To some degree, technology spending in general has always been like this, and will always likely be this way."


Lifelong Machine Learning: Machines Teaching Other Machines

Lifelong learning (LL) is a relatively new field in machine learning in which AI agents learn continually as they encounter new tasks. The goal of LL is for agents to acquire knowledge of novel tasks without forgetting how to perform previous ones. This differs from typical “train-then-deploy” machine learning, where agents cannot learn progressively without suffering “catastrophic interference” (also called catastrophic forgetting): the AI abruptly and drastically forgets previously learned information upon learning new information. According to the team, their work represents a potentially new direction in the field of lifelong machine learning. Current work in LL involves getting a single AI agent to learn tasks one at a time, sequentially. In contrast, SKILL involves a multitude of AI agents all learning at the same time, in parallel, thus significantly accelerating the learning process. The team’s findings demonstrate that when SKILL is used, the time required to learn all 102 tasks is reduced by a factor of 101.5.
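As a heavily hedged toy sketch (this is not the SKILL algorithm itself, only the sequential-versus-parallel contrast it exploits; train_task is a hypothetical stand-in for real training), the wall-clock benefit of giving each task its own agent and then pooling the results looks like this:

# Toy contrast between one agent learning 102 tasks in sequence and 102
# "agents" (here, threads around a sleep-based stand-in) learning in parallel
# and pooling what they learned. Not the SKILL algorithm itself.
import time
from concurrent.futures import ThreadPoolExecutor

def train_task(task_id: int) -> dict:
    time.sleep(0.05)                      # placeholder for real training work
    return {"task": task_id, "skill": f"weights_for_task_{task_id}"}

tasks = list(range(102))

start = time.perf_counter()
pooled = [train_task(t) for t in tasks]   # single agent, one task at a time
sequential = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(tasks)) as agents:
    pooled = list(agents.map(train_task, tasks))
parallel = time.perf_counter() - start

print(f"speedup ~ {sequential / parallel:.0f}x")  # approaches the task count

The reported factor of roughly 101.5 follows the same arithmetic: with one agent per task, total learning time collapses toward the time of a single task plus the overhead of sharing what each agent learned.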


Is Your Organization Vulnerable to Shadow AI?

Perhaps the biggest danger associated with unaddressed shadow AI is that sensitive enterprise data could fall into the wrong hands. This poses a significant risk to privacy and confidentiality, cautions Larry Kinkaid, a consulting manager at BARR Advisory, a cybersecurity and compliance solutions provider. “The data could be used to train AI models that are commingled, or worse, public, giving bad actors access to sensitive information that could be used to compromise your company’s network or services.” There could also be serious financial repercussions if the data is subject to legal, statutory, or regulatory protections, he adds. Organizations dedicated to responsible AI deployment and use follow strong, explainable, ethical, and auditable practices, Zoldi says. “Together, such practices form the basis for a responsible AI governance framework.” Shadow AI occurs out of sight and beyond AI governance guardrails. When used to make decisions or to affect business processes, it usually doesn’t meet even basic governance standards. “Such AI is ungoverned, which could make its use unethical, unstable, and unsafe, creating unknown risks,” he warns.


Been there, doing that: How corporate and investment banks are tackling gen AI

In new product development, banks are using gen AI to accelerate software delivery with so-called code assistants. These tools can help with code translation (for example, .NET to Java) and with bug detection and repair. They can also improve legacy code, rewriting it to make it more readable and testable, and document the results. Plenty of financial institutions could benefit. Exchanges and information providers, payments companies, and hedge funds regularly release code; in our experience, these heavy users could cut time to market in half for many code releases. For many banks that have long been pondering an overhaul of their technology stack, the new speed and productivity afforded by gen AI mean the economics have changed. Consider securities services, where low margins have meant that legacy technology has been more neglected than loved; now, tech stack upgrades could be in the cards. Even in critical domains such as clearing systems, gen AI could yield significant reductions in time and rework effort.


Microsoft’s data centers are going nuclear

The software giant is already working with at least one third-party nuclear energy provider in an effort to reduce its carbon footprint. The job posting, though, signals an effort to make nuclear energy an important part of its energy strategy. It said that the new nuclear expert “will maintain a clear and adaptable roadmap for the technology’s integration,” and have “experience in the energy industry and a deep understanding of nuclear technologies and regulatory affairs.” Microsoft has made no public statement on the specific goals of its nuclear energy program, but the obvious motivation — particularly in the wake of its third-party nuclear energy deal — is environmental. Although nuclear power has long been plagued by serious concerns about its safety and its role in nuclear weapons proliferation, the rapidly worsening climate situation makes it a comparatively attractive alternative to fossil fuels, given the relatively large amount of energy that can be generated without producing atmospheric emissions.


The pitfalls of neglecting security ownership at the design stage

Without clear ownership of security during the design stage, many problems can quickly arise. Security should never be an afterthought or a ‘bolted on’ mechanism added after a product is created. Development teams primarily focus on creating functional and efficient software and hardware, whereas security teams specialize in identifying and mitigating potential risks. Without collaboration, or more ideally integration, between the two, security may be overlooked or inadequately addressed, leaving a heightened risk of cyber vulnerabilities. A good example is a privacy shutter for cameras in laptop computers. Ever see a sticky note on someone’s PC covering the camera? A design team may focus on the quality and placement of the camera as the primary factors in the user experience. However, security professionals know that many users want a physical solution to guarantee the camera cannot capture images when they don’t want it to, and that on/off indicator lights are not good enough.


Enterprise Architecture Must Adapt for Continuous Business Change

Continuous business change is an agile enterprise mindset that begins with the realization that change is constant and that the business needs to be organized to support this continual change. This change is delivered as a constant flow of activity directed by distributed teams and democratized processes. It is orchestrated through the transparency of information and includes automated monitoring and workflows. Continuous business change requires EA, as a discipline, to evolve to match the new mindset. Change processes need to be adapted and updated to deliver faster time to value and quicker iteration of business ideas. These adaptations require the democratization of design, away from a traditional centralized approach, to allow for a quicker and more efficient change process. These change processes recognize autonomous business areas that deliver their own change. One example of this is moving away from being project-focused to being product-focused. Product-based companies organize their teams around autonomous products, which may also be known as value streams or bounded domains.


A history of online payment security

Google was among the first major sites to use two-factor authentication, requiring those requesting access to have not only a password but also access to the phone number used when creating the account. Since then, many companies have taken this system to the next level, providing their users with a multitude of ways to secure their online payments: password security, a six-digit PIN, account security tokens, and SMS validation. Other than a DNA match, you can’t get much more verified than this. Privacy and confidentiality of information, especially when it concerns financial data, is critical to customer satisfaction. Millions of financial transactions are done online every day, involving payments to online shopping websites or merchant stores, bill payments, or bank transactions. Security of cashless transactions done on a virtual platform requires an element of bankability and trust that can only be generated by the best and most reputable brands and leaders in the industry.
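As a hedged sketch of one of the mechanisms listed above, the six-digit codes produced by account security tokens are typically time-based one-time passwords. The example below uses the pyotp library with hypothetical enrollment logic and is not tied to any specific provider:

# Minimal TOTP second-factor sketch (pip install pyotp). In practice the secret
# is generated once at enrollment, shown to the user as a QR code for their
# authenticator app, and stored server-side for later verification.
import pyotp

secret = pyotp.random_base32()            # shared secret created at enrollment
totp = pyotp.TOTP(secret)                 # six-digit code that rotates every 30s

code_from_user = totp.now()               # stand-in for the code the user types
if totp.verify(code_from_user, valid_window=1):   # allow one step of clock drift
    print("second factor accepted")
else:
    print("second factor rejected")

SMS validation works the same way in spirit, with the network delivering the one-time code instead of a local authenticator app.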


Rediscovering the value of information

In the corporate sector, the value destroyed by poor information management practices is often measured in fines and lawsuit payouts. But before such catastrophes come to light, what metrics do we use — or should we use — to determine whether a publicly traded company has its information management house in order? Who manages information more effectively — P&G or Unilever; Coke or Pepsi; GM or Ford; McDonald’s or Chipotle; Marriott or Hilton? When interviewing a potential new hire, how should we ascertain whether they are a skilled and responsible information manager? Business historians tell us that it was about 10 years before the turn of the century that “information” — previously thought to be a universal “good thing” — started being perceived as a problem. About 20 years after the invention of the personal computer, the general population started to feel overwhelmed by the amount of information being generated. We thrive on information, we depend on information, and yet we can also choke on it. We have available to us more information than one person could ever hope to process.


Software Delivery Enablement, Not Developer Productivity

Software delivery enablement and 2023’s trend of platform engineering won’t succeed by focusing solely on people and technology. At most companies, processes need an overhaul too. A team has “either a domain that they’re working in or they have a piece of functionality that they have to deliver,” she said. “Are they working together to deliver that thing? And, if not, what do we have to do to improve that?” Developer enablement should be concentrated at the team-outcome level, says Daugherty, and it can be positively influenced by four key capabilities: continuous integration and continuous delivery (CI/CD); automation and Infrastructure as Code (IaC); integrated testing and security; and immediate feedback. “Accelerate,” the iconic, metrics-centric guide to DevOps and scaling high-performing teams, identifies decisions that are proven to help teams speed up delivery. One is that empowering teams to choose their own tools improves performance.



Quote for the day:

“Success is actually a short race - a sprint fueled by discipline just long enough for habit to kick in and take over.” -- Gary W. Keller