Daily Tech Digest - January 31, 2023

Microsoft says cloud demand waning, plans to infuse AI into products

Microsoft Azure and other cloud services grew 38% year on year in constant currency terms, a slowdown of four percentage points from the previous quarter. “As I noted earlier, we exited Q2 with Azure growth in the mid-30s in constant currency. And from that, we expect Q3 growth to decelerate roughly four to five points in constant currency,” Amy Hood, chief financial officer at Microsoft, said during an earnings call. Cloud growth is expected to slow further through the year, warned Microsoft Chief Executive Satya Nadella. “As I meet with customers and partners, a few things are increasingly clear. Just as we saw customers accelerate their digital spend during the pandemic, we are now seeing them optimize that spend,” Nadella said during the earnings call, adding that enterprises were exercising caution in spending on cloud. Explaining further how enterprises are optimizing their spend, Nadella said that they wanted to get the maximum return on their investment and free up spending to put into new workloads.


Why Software Talent Is Still in Demand Despite Tech Layoffs, Downturn and a Potential Recession

We live in a world run by software programs. With increasing digitization, there will always be a demand for software solutions. In particular, software developers are in high demand within the tech industry. In the age of data, firms need software developers who will analyze the data to create software solutions. They will also use the data to understand user needs, monitor performance and modify the programs accordingly. Software developers have skills that make them valuable in many industries. As long as an industry needs software solutions, a developer can provide and customize those solutions for the firms that need them. ... Many tech workers suffered a terrible blow in 2022. Their prestigious jobs at giant tech firms vanished, leaving many stranded and confused. However, there is still a significant demand for tech professionals in our technological world, particularly software developers. Software development is the bedrock of the tech industry. Software engineers with valuable skill sets, experience and drive will quickly find other positions and opportunities.


Cybercrime Ecosystem Spawns Lucrative Underground Gig Economy

Improving defenses have forced attackers to improve their tools and techniques, driving the need for more technical specialists, explains Polina Bochkareva, a security services analyst at Kaspersky. "Business related to illegal activities is growing on underground markets, and technologies are developing along with it," she says. "All this leads to the fact that attacks are also developing, which requires more skilled workers." The underground jobs data highlights the surge in activity in cybercriminal services and the professionalization of the cybercrime ecosystem. Ransomware groups have become much more efficient as they have turned specific facets of operations into services, such as offering ransomware-as-a-service (RaaS), running bug bounties, and creating sales teams, according to a December report. In addition, initial access brokers have productized the opportunistic compromise of enterprise networks and systems, often selling that access to ransomware groups. Such division of labor requires technically skilled people to develop and support the complex features, the Kaspersky report stated.


3 ways to stop cybersecurity concerns from hindering utility infrastructure modernization efforts

Cybersecurity is a priority across industries and borders, but several factors add to the complexity of the unique environment in which utilities operate. Along with a constant barrage of attacks, utilities, as a regulated industry, face several new compliance and reporting mandates, such as the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). Other security considerations include aging OT, which can be challenging to update and to protect, the lack of control over third-party technologies and IoT devices such as smart home devices and solar panels, and finally, the biggest threat of all: human error. These risk factors put extra pressure on utilities, as one successful attack can have deadly consequences. The hacker who attempted (thankfully unsuccessfully) to poison the water supply in Oldsmar, Florida, is one example that comes to mind. Utilities have a lot to contend with even before adding data analytics into the mix. However, it is interesting to point out that consumers are significantly less worried about the privacy of data collected by utilities.


Why cybersecurity teams are central to organizational trust

No business is an island; it depends on many partners (whether formal business partners or some other relationship) – a fact highlighted by the widespread supply chain challenges across many industries over the past couple of years. The security of software supply chains – which is to say, dependencies on upstream libraries and other code used by organizations in their software – is a topic of considerable focus today, up to and including from the U.S. executive branch. It’s still arguably not getting the attention it deserves, though. The aforementioned 2023 Global Tech Outlook report found that, among the funding priorities within security, third-party or supply chain risk management came in at the very bottom, with just 12 percent of survey respondents saying it was a top priority. Deb Golden, who leads Deloitte’s U.S. Cyber and Strategic Risk practice, told the authors that there needs to be more scrutiny over supply chains. “Organizations are accountable for safeguarding information and share a responsibility to respond and manage broader network threats in near real-time,” she said.


Global Microsoft cloud-service outage traced to rapid BGP router updates

The withdrawal of BGP routes prior to the outage appeared largely to impact direct peers, ThousandEyes said. With a direct path unavailable during the withdrawal periods, the next best available path would have been through a transit provider. Once direct paths were readvertised, the BGP best-path selection algorithm would have chosen the shortest path, resulting in a reversion to the original route. These re-advertisements repeated several times, causing significant route-table instability. “This was rapidly changing, causing a lot of churn in the global internet routing tables,” said Kemal Sanjta, principal internet analyst at ThousandEyes, in a webcast analysis of the Microsoft outage. “As a result, we can see that a lot of routers were executing best path selection algorithm, which is not really a cheap operation from a power-consumption perspective.” More importantly, the routing changes caused significant packet loss, leaving customers unable to reach Microsoft Teams, Outlook, SharePoint, and other applications. 
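
The churn ThousandEyes describes follows directly from how BGP best-path selection works: when the direct route is withdrawn, routers fall back to the longer transit path, and when it is readvertised they switch back, recomputing the table each time. The sketch below is a heavily simplified illustration of that tie-breaking logic, with made-up prefix and AS numbers; it is not Microsoft's or ThousandEyes' tooling, and real BGP applies many more tie-breakers.

```python
# Heavily simplified sketch of BGP best-path selection (illustrative prefix
# and documentation-range AS numbers only). Real BGP applies many more
# tie-breakers: origin, MED, eBGP vs iBGP, router ID, and so on.

from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    as_path: list          # sequence of AS numbers toward the prefix
    local_pref: int = 100  # higher is preferred
    via: str = ""          # "direct peer" or "transit provider"

def best_path(routes):
    """Prefer higher local preference, then the shortest AS path."""
    return min(routes, key=lambda r: (-r.local_pref, len(r.as_path)))

direct  = Route("203.0.113.0/24", as_path=[64500], via="direct peer")
transit = Route("203.0.113.0/24", as_path=[64510, 64500], via="transit provider")

table = [direct, transit]
print(best_path(table).via)   # direct peer wins on the shorter AS path

# A withdrawal of the direct route forces every router holding it to
# re-run best-path selection and shift traffic to the transit provider...
table.remove(direct)
print(best_path(table).via)   # transit provider

# ...and the re-advertisement triggers yet another recomputation, which,
# repeated many times, is the routing-table churn described above.
table.append(direct)
print(best_path(table).via)   # direct peer again
```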


New analog quantum computers to solve previously unsolvable problems

The essential idea of these analog devices, Goldhaber-Gordon said, is to build a kind of hardware analogy to the problem you want to solve, rather than writing computer code for a programmable digital computer. For example, say that you wanted to predict the motions of the planets in the night sky and the timing of eclipses. You could do that by constructing a mechanical model of the solar system, where someone turns a crank and rotating interlocking gears represent the motion of the moon and planets. In fact, such a mechanism, dating back more than 2,000 years, was discovered in an ancient shipwreck off the coast of a Greek island. This device can be seen as a very early analog computer. Not to be sniffed at, analog machines were used even into the late 20th century for mathematical calculations that were too hard for the most advanced digital computers at the time. But to solve quantum physics problems, the devices need to involve quantum components.


Will Your Company Be Fined in the New Data Privacy Landscape?

“Some large US companies are continuing to be dealt pretty significant fines,” she says. “The regulation and fining of companies like Meta and others have raised consumer awareness of privacy rights. I think we’re approaching a perfect storm in the US where the rest of the world is moving toward a more consumer-protective landscape, so the US is following suit.” This includes activity by state policymakers as well as responses to cybersecurity breaches, Simberkoff says. She sees the conversation on data privacy being driven by increasingly complex regulatory requirements and by consumer awareness of data privacy risks such as identity theft or stolen credit card information. “I think, frankly, companies like Apple help move that dialogue forward because they’ve made privacy one of their key issues in advertising,” says Simberkoff. The elevation of data privacy policies and consumer awareness might, at first blush, seem detrimental to data-driven businesses, but it could just require new operational approaches. “I think what we’re going to end up seeing is a different way of thinking about these things,” she says.


What is the role of a CTO in a start-up?

The role of the CTO in a start-up can vary greatly from an equivalent position in a more established scale-up business. While in both scenarios the position concerns leadership of all technological decisions within a business, there are considerable differences in the focus and nature of the role. “Start-ups tend to be disruptive and fast-paced, with the goal of quick growth over long-term strategy development. So, start-up CTOs are often responsible for building the technological infrastructure from the ground up,” said Ryan Jones, co-founder of OnlyDataJobs. “Whereas in an established company, a CTO might be responsible for reviewing and improving the current technology stack and data infrastructure, in a start-up, these structures might not exist. So, the onus is on the CTO to create and implement an entire technological infrastructure and strategy. This also means that a hands-on approach is required. “Because start-up CTOs may be the only technologically minded individual within the company, they’re often required to go back on the tools and do the actual work required themselves rather than delegating to a team.”


Your Tech Stack Doesn’t Do What Everyone Needs It To. What Next?

IT needs to collaborate with citizen developers throughout the process to ensure maximum safety and efficiency. From the beginning, it’s important to confirm the team’s overall approach, select the right tools, establish roles, set goals, and discuss when citizen developers should ask for support from IT. Appointing a leader for the citizen developer program is a great way to help enforce these policies and hold the team accountable for meeting agreed-upon milestones. To encourage collaboration and make citizen automation a daily practice, it’s important to work continuously to identify pain points and manual work within business processes that can be automated. IT should regularly communicate with teams across the business, finance and HR departments to find opportunities for automation, clearly mapping out what change would look like for those impacted. Gaining buy-in from other team leaders is critical, so citizen developers and IT need to become internal advocates for the benefits of automation. Another non-negotiable ground rule is that citizen developers should only use IT-sanctioned tools and platforms.



Quote for the day:

"If a window of opportunity appears, don't pull down the shade." -- Tom Peters

Daily Tech Digest - January 30, 2023

How to survive below the cybersecurity poverty line

All types of businesses and sectors can fall below the cybersecurity poverty line for different reasons, but generally, healthcare, start-ups, small- and medium-size enterprises (SMEs), education, local governments, and industrial companies all tend to struggle the most with cybersecurity poverty, says Alex Applegate ... These include wide, cumbersome, and outdated networks in healthcare, small IT departments and immature IT processes in smaller companies/start-ups, vast network requirements in educational institutions, statutory obligations and limitations on budget use in local governments, and custom software built around specific functionality and configurations in industrial businesses, he adds. Critical National Infrastructure (CNI) firms and charities also commonly find themselves below the cybersecurity poverty line, for similar reasons. The University of Portsmouth Cybercrime Awareness Clinic’s work with SMEs for the UK National Cyber Security Centre (NCSC) revealed that cybersecurity was a secondary issue for most micro and small businesses it engaged with, evidence that it is often the smallest companies that find themselves below the poverty line, Karagiannopoulos says.


The Importance of Testing in Continuous Deployment

Test engineers are usually perfectionists (I speak from experience), which is why it’s difficult for them to accept the risk of issues reaching end users. This approach has a hefty price tag and slows delivery, but it’s acceptable if you deliver only once or twice per month. The correct approach is to automate the critical paths in the application, both from a business perspective and for application reliability. Everything else can go to production without thorough testing because, with continuous deployment, you can fix issues within hours or minutes. For example, if item sorting and filtering stops working in production, users might complain, but the development team can fix the issue quickly. Would it impact the business? Probably not. Would you lose a customer? Probably not. These are risks that should be OK to take if you can quickly fix issues in production. Of course, it all depends on the context – if you’re providing document storage services for legal investigations, it would be a good idea to have an automated test for sorting and filtering.
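
As a toy illustration of the sorting-and-filtering example, an automated check on such a critical path could be as small as the sketch below. The catalogue code and test names are hypothetical, not taken from the article; the point is only that a lightweight suite like this runs on every deployment.

```python
# Hypothetical example of automating a business-critical path (catalogue
# sorting and filtering). Run with pytest; names are illustrative only.

def filter_and_sort(items, min_price=0, key="price"):
    """Critical path: drop items below a minimum price, then sort by price."""
    return sorted((i for i in items if i[key] >= min_price), key=lambda i: i[key])

def test_filter_and_sort_critical_path():
    items = [{"price": 30}, {"price": 10}, {"price": 20}]
    result = filter_and_sort(items, min_price=15)
    assert [i["price"] for i in result] == [20, 30]

def test_empty_catalogue_does_not_break():
    assert filter_and_sort([], min_price=15) == []
```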


Why Trust and Autonomy Matter for Cloud Optimization

With organizations beginning to ask teams to do more with less, optimization — of all kinds — is going to become a vital part of what technology teams (development and operations alike) have to do. But for that to be really effective, team autonomy also needs to be founded on confidence — you need to know that what you’re investing time, energy and money on makes sense from the perspective of the organization’s wider goals. Fortunately, Spot can help here too. It gives teams the data they need to make decisions about automation, so they can prioritize according to what matters most from a strategic perspective. “People aren’t really sure what’s going to be happening six, nine, 10 months down the road,” Harris says. “Making it easier for people to get that actionable data no matter what part of the business you’re in, so that you can go in and you can say, ‘Here’s what we’re doing right, here’s where we can optimize’ — that’s a big focus for us.” One of the ways that Spot enables greater autonomy is with automation features.


Keys to successful M&A technology integration

For large organisations merging together, unifying networks and technologies may take years. But for SMBs (small and medium-sized businesses) utilising more traditional technologies such as VPNs, integrations may be accomplished more quickly and with less friction. In scenarios where both the acquiring company and the company being acquired utilise more sophisticated SD-WAN networks, these technologies tend to be closed and proprietary in nature. Therefore, if both companies utilise the same vendor, integration can be managed more easily. On the other hand, if the vendors differ, it is not going to interlink with other networks as easily and needs a more careful step-by-step network transformation plan. ... Another key to a successful technology merger is to truly understand where your applications are going. For example, if two New York companies are joining forces, with most of the data and applications residing in the US East Coast, it wouldn’t make sense to interconnect networks in San Francisco. Along with this, it is important to make sure your regional networks are strong, even within your global network. In terms of where you are sending your traffic and data, it’s important to be as efficient as possible.


Understanding service mesh?

Service meshes don’t give an application’s runtime environment any additional features. What makes them unique is that they abstract the logic governing service-to-service communication into an infrastructure layer. This is accomplished by integrating a service mesh into an application as a collection of network proxies. Proxies are frequently used to access websites: typically, a company’s web proxy receives requests for a web page and evaluates them for security flaws before sending them on to the host server. Responses from the page are also forwarded to the proxy for security checks before being returned to the user. ... But a service mesh is an essential management system that helps all the different containers work in harmony. Here are several reasons why you would want to implement a service mesh in an orchestration framework environment. In a typical orchestration framework environment, user requests are fulfilled through a series of steps, each performed by a container. Each container runs a service that plays a different but vital role in fulfilling the request. Let us call the role played by each container its business logic.
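
To make the idea of an abstracted communication layer concrete, here is a deliberately simplified sketch of the sidecar-proxy pattern: the proxy owns retries, timeouts and request metrics so the business-logic container does not have to. It is a hypothetical illustration, not the implementation of any real mesh such as Istio or Linkerd.

```python
# Deliberately simplified sketch of the sidecar-proxy idea: service-to-service
# calls pass through a proxy that owns retries, timeouts and metrics, keeping
# that logic out of the business code. Hypothetical, not a real mesh.

import time
import urllib.request

class SidecarProxy:
    def __init__(self, service_name, retries=3, timeout=2.0):
        self.service_name = service_name
        self.retries = retries
        self.timeout = timeout
        self.request_count = 0   # the mesh layer, not the app, keeps metrics

    def call(self, url):
        """Forward a request on behalf of the local service, with retries."""
        self.request_count += 1
        for attempt in range(1, self.retries + 1):
            try:
                with urllib.request.urlopen(url, timeout=self.timeout) as resp:
                    return resp.read()
            except OSError:
                # A real proxy would also handle mTLS, circuit breaking and
                # distributed tracing here; we just back off and retry.
                time.sleep(0.1 * attempt)
        raise RuntimeError(f"{self.service_name}: upstream unavailable")

# The business-logic container only ever sees a plain function call:
proxy = SidecarProxy("orders-service")
# body = proxy.call("http://inventory.internal/stock/42")  # hypothetical URL
```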


Chaos Engineering: Benefits of Building a Test Strategy

Many organizations struggle to get visibility into where their most sensitive data is stored. Improper handling of that data can have disastrous consequences, such as compliance violations or trade secrets falling into the wrong hands. “Using chaos engineering could help identify vulnerabilities that, unless remediated, could be exploited by bad actors within minutes,” Benjamin says. Kelly Shortridge, senior principal of product technology at Fastly, says organizations can use chaos engineering to generate evidence of their systems’ resilience against adverse scenarios, like attacks. “By conducting experiments, you can proactively understand how failure unfolds, rather than waiting for a real incident to occur,” she says. The very nature of experiments requires curiosity -- the willingness to learn from evidence -- and flexibility so changes can be implemented based on that evidence. “Adopting security chaos engineering helps us move from a reactive posture, where security tries to prevent all attacks from ever happening, to a proactive one in which we try to minimize incident impact and continuously adapt to attacks,” she notes.


How to get buy-in on new technology: 3 tips

When making a case for new technology, keep your audience in mind. Tailoring your arguments to their role and goals will put you in a much better position to capture their attention and generate enthusiasm. Sometimes this will require you to shift away from strict business goals. If you need to speak with the chief revenue officer and are trying to justify an additional $100,000 for your tech stack, for example, you will need to focus on the bottom line and the financial benefit your proposal could provide. On the other hand, the head of engineering might not be interested in the finances and would rather discuss how engineers can better avoid burnout or otherwise become easier to manage. When advocating for stack improvements, working with a partner helps substantially. It’s good to have a boss or teammate help, but even better to find a leader on a different team or even in another department. If multiple departments have team members who champion a specific improvement, it makes a strong case that there’s a pervasive need for stack enhancements across the entire company.


How organizations can keep themselves secure whilst cutting IT spending

The zero trust network access model has been a major talking point for CIOs, CISOs and IT professionals for some time. While most organizations do not fully understand what zero trust is, they recognize the importance of the initiative. Enforcing principles of least privilege minimizes the impact of an attack. In a zero trust model, an organization can authorize access in real-time based on information about the account collected over time. To make such informed decisions, security teams need accurate and up-to-date user profiles. Without them, security teams can’t be 100% confident that the user gaining access to a critical resource isn’t a threat. However, with identity data sprawled across cloud and legacy systems that are unable to communicate with each other, such decisions cannot be made accurately. Ultimately, the issue of identity management isn’t only getting more challenging with the digitalization of IT and migration to the cloud – it’s now also halting essential security projects such as zero trust implementation.
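
A rough sketch of what a real-time, least-privilege authorization decision based on an identity profile might look like is shown below. The attribute names are hypothetical; an actual zero trust deployment would pull these signals from an identity provider and a device-posture service rather than a dictionary.

```python
# Minimal sketch of a least-privilege, real-time authorization decision built
# from an identity profile. Attribute names are hypothetical; a real zero
# trust deployment would pull these signals from an identity provider and a
# device-posture service rather than a dictionary.

from datetime import datetime, timedelta

def authorize(profile, resource_sensitivity):
    """Grant access only when every collected signal supports it; deny by default."""
    checks = [
        profile.get("mfa_verified", False),
        profile.get("device_compliant", False),
        profile.get("last_risk_review", datetime.min)
            > datetime.utcnow() - timedelta(days=30),
        profile.get("clearance_level", 0) >= resource_sensitivity,
    ]
    return all(checks)

user = {
    "mfa_verified": True,
    "device_compliant": True,
    "last_risk_review": datetime.utcnow() - timedelta(days=3),
    "clearance_level": 2,
}
print(authorize(user, resource_sensitivity=3))  # False: clearance 2 < sensitivity 3
```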


Economic headwinds could deepen the cybersecurity skills shortage

Look at anyone’s research and you’ll see that more organizations are turning to managed services to augment overburdened and under-skilled internal security staff. For example, recent ESG research on security operations indicates that 85% of organizations use some type of managed detection and response (MDR) service, and 88% plan to increase their use of managed services in the future. As this pattern continues, managed security service providers (MSSPs) will need to add headcount to handle increasing demand. Since service provider business models are based on scaling operations through automation, they will calculate a higher return on employee productivity and be willing to offer more generous compensation than typical organizations. One aggressive security services firm in a small city could easily gain a near monopoly on local talent. At the executive level, we will also see increasing demand for the services of virtual CISOs (vCISOs) to create and manage security programs in the near term.


2023 Will Be the Year FinOps Shifts Left Toward Engineering

By enabling developers to use dynamic logs to troubleshoot issues in production, without the need to redeploy or add more costly logs and telemetry, teams let developers own the FinOps cost optimization responsibility earlier in the development cycle and shorten the cost feedback loop. Dynamic logs and developer-native observability triggered from the development environment (IDE) can be an actionable way to cut overall costs and better facilitate cross-team collaboration, which is one of the core principles of FinOps. “FinOps will become more of an engineering problem than it was in the past, where engineering teams had fairly free rein on cloud consumption. You will see FinOps information shift closer to the developer and end up part of pull-request infrastructure down the line,” says Chris Aniszczyk, CTO at the Cloud Native Computing Foundation. Keep in mind that it’s not always easy to prioritize and decide when to pull the cost optimization trigger.



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - January 29, 2023

Data Mesh Architecture Benefits and Challenges

Data mesh architectures can help businesses find quick solutions to day-to-day problems, discover better ways to manage their resources, and develop more agile business models. Here is a quick review of data mesh architecture benefits: The data mesh architecture is adaptable, able to adjust as the company scales, changes, and grows; it enables data from disparate systems to be collected, integrated, and analyzed in place, eliminating the need to extract data from those systems into one central location for further processing; within a data mesh, each individual domain becomes a mini-enterprise and gains the power to self-manage and self-serve on all aspects of its Data Science and data processing projects; a data mesh architecture allows companies to increase efficiency by eliminating the single-pipeline data flow, while protecting the system through centralized monitoring infrastructure; and domain teams can design and develop their need-specific analytics and operational use cases while maintaining full control of all their data products and services.


Uncovering the Value of Data & Analytics: Transformation With Targeted Reporting

Most of the time, (Cloud) Data & Analytics transformations are initially approved for implementation based on a solid business case with clear return expectations. However, programs often don’t have a functioning value framework to report on the business value generated from change and the progress toward the initial expectations. In such cases, the transformation impact for executives and business leaders is a “black box” with no clear indication of direction. As time passes and the costs associated with transformation programs increase due to scaling, an insufficient Value Reporting Framework can lead to loss in executive buy-in and reduction of investment budgets. Furthermore, with high market volatility, initiatives without a tangible influence on the company’s bottom line tend to be deprioritized quickly. On the more positive side, a high number of companies have robust value scorecards to track their transformation performance. However, metrics in these scorecards tend to be either too operational for executives to easily digest or focus exclusively on cost aspects. 


Elevating Security Alert Management Using Automation

Context — every security analyst says they need it, but everyone seems to have a different definition for it. If you’ve ever worked an alert queue and thought to yourself, “I wish I could stop these alerts from appearing right now” or “Why am I looking at activity that someone else is already triaging,” then this section is for you — within the first two weeks of deployment, this feature of the system reduced our alert volume by 25%, saving 3 to 4.5 hours of manual effort. In our alert management system, “context” is information derived from the alert payload that is used as metadata for suppression¹, deduplication², and metrics. Reduction of toil in the system is primarily attributed to its ability to use context to stop wasteful alerts from getting to the team. This creates the opportunity for the team to, for example, suppress alerts that we know require tuning by a detection engineer or ignore duplicate alerts for activity that is being investigated but may be on hold while we wait for additional information. These alerts are never dropped — they still flow through the rest of the system and generate a ticket — but they are not assigned to a person for triage.
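
A stripped-down sketch of that context-driven flow is shown below: metadata derived from the alert payload drives suppression of known-noisy detections and deduplication of activity already under triage, while every alert still generates a ticket. The field names are hypothetical, not the team's actual schema.

```python
# Stripped-down sketch of context-driven alert handling: derive metadata from
# the alert payload, suppress known-noisy detections and deduplicate activity
# already under triage. Every alert still yields a ticket; only assignment to
# an analyst is skipped. Field names are hypothetical.

SUPPRESSED_RULES = {"noisy-rule-pending-tuning"}   # known to need detection tuning
seen_contexts = set()                              # activity already being triaged

def derive_context(alert):
    """Context used for suppression, deduplication and metrics."""
    return (alert["rule_id"], alert["host"], alert["user"])

def route_alert(alert):
    context = derive_context(alert)
    if alert["rule_id"] in SUPPRESSED_RULES:
        return "ticket-only"            # recorded, never assigned for triage
    if context in seen_contexts:
        return "deduplicated"           # same activity is already with an analyst
    seen_contexts.add(context)
    return "assign-to-analyst"

print(route_alert({"rule_id": "r1", "host": "web01", "user": "svc"}))  # assign-to-analyst
print(route_alert({"rule_id": "r1", "host": "web01", "user": "svc"}))  # deduplicated
print(route_alert({"rule_id": "noisy-rule-pending-tuning",
                   "host": "db02", "user": "svc"}))                    # ticket-only
```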


Could A Data Breach Land Your CISO In Prison?

Why would a CISO worry about personally facing legal consequences for company cybersecurity decisions? I don’t have direct knowledge of Kissner’s motives. However, I do know that for the last several months CISOs have been talking to each other about how, last October, a federal jury convicted the CISO of a major U.S. company for covering up a data breach. The jury found Joe Sullivan, a former Chief Security Officer, guilty of obstructing justice and actively failing to report a felony—charges stemming from “bug bounty” payments he authorized to hackers who breached the company in 2016. The company was already responding to an investigation into a 2014 breach but did not inform the FTC about the new breach in 2016. Sullivan didn’t make that decision alone: others in the company were looped in, including then-CEO Travis Kalanick, the Chief Privacy Officer, and the company’s in-house privacy/security lawyer. Nevertheless, Sullivan was the only employee to face charges. How might CISOs handle their roles differently in a world where a poorly-handled breach won’t just get you fired—it might land you in prison?


The new age of exploration: Staking a claim in the metaverse

Spatial ownership is the essential concept that makes possible an open metaverse and 3D digital twin of the earth that is not built or controlled by a monopolistic entity. Spatial ownership enables users to own virtual land in the metaverse. It uses non-fungible tokens (NFTs), which represent a unique digital asset that can only have one official owner at a time and can’t be forged or modified. In the metaverse, users can buy NFTs linked to particular parcels of land that represent their ownership of these “properties.” Spatial ownership in the metaverse can be compared to purchasing web domains on today’s internet. As with physical real estate, some speculatively buy web domains hoping to sell the rights to a potentially popular or unique URL at a future date. In contrast, others purchase to lock down control and ownership over their own little portion of the web. Domains are similar to prime real estate in that almost every business needs one, and many brands will look for the same or similar names. The perfect domain name can help a business monopolize its market and get the lion’s share of web visibility in its niche.


Empowering Leadership in a VUCA World

The term VUCA (volatility, uncertainty, complexity, and ambiguity) aptly applies to the world we live in. Making business decisions has become incredibly complex, and we’re not just making traditional budget and managerial decisions. More than ever, leaders have to consider community impact, employee wellbeing, and business continuity under extraordinary uncertainty. There are so many considerations for even the smallest decisions we make. The highly distributed nature of how people work today means we have to consider a broader potential impact of every statement and every choice. Leaders have the responsibility to think about equity when some employees are sitting in the room with you and others are remote. How much face time are you giving each? Are you treating instant messages with the same level of attention as someone dropping into your office? This situation is not likely to be any less of a challenge for future leaders. It’s our responsibility as leaders, as people who impact the future of our businesses, to give all the people in our organizations an equal opportunity to contribute and grow.


Using Artificial Intelligence To Tame Quantum Systems

Quantum computing has the potential to revolutionize the world by enabling high computing speeds and reformatting cryptographic techniques. That is why many research institutes and big-tech companies such as Google and IBM are investing a lot of resources in developing such technologies. But to enable this, researchers must achieve complete control over the operation of such quantum systems at very high speed, so that the effects of noise and damping can be eliminated. “In order to stabilize a quantum system, control pulses must be fast – and our artificial intelligence controllers have shown the promise to achieve such a feat,” Dr. Sarma said. “Thus, our proposed method of quantum control using an AI controller could provide a breakthrough in the field of high-speed quantum computing, and it might be a first step to achieving quantum machines that are self-driving, similar to self-driving cars. We are hopeful that such methods will attract many quantum researchers for future technological developments.”


Avoid a Wipeout: How To Protect Organisations From Wiper Malware

A 3-2-1-1 data-protection strategy is a best practice for defending against malware, including wiper attacks. This strategy entails maintaining three copies of your data, on two different media types, with one copy stored offsite. The final 1 in the equation is immutable object storage. By maintaining multiple copies of data, organisations will have backup available in case one copy is lost or corrupted. It is imperative in the event of a wiper attack, which destroys or erases data. Storing data on different media types also helps protect against wiper attacks. This way, if one type of media is compromised, you still have access to your data through the other copies. Keeping at least one copy of your data offsite, either in a physical location or in the cloud, provides an additional layer of protection. If a wiper attack destroys on-site copies of your data, you’ll still have access to your offsite backup. The final advantage is immutable object storage. Immutable object storage involves continuously taking snapshots of your data every 90 seconds, ensuring that you can quickly recover it even during a wiper attack.
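
One simple way to picture the 3-2-1-1 rule is as a checklist over an inventory of backup copies, as in the sketch below; the inventory format is hypothetical, and real tooling would verify each property rather than trust a flag.

```python
# Toy checklist for the 3-2-1-1 rule: three copies, two media types, one copy
# offsite, one copy immutable. The inventory format is hypothetical; real
# tooling would verify each property instead of trusting a flag.

def meets_3_2_1_1(copies):
    return (
        len(copies) >= 3
        and len({c["media"] for c in copies}) >= 2
        and any(c["offsite"] for c in copies)
        and any(c["immutable"] for c in copies)
    )

inventory = [
    {"media": "disk",   "offsite": False, "immutable": False},  # primary copy
    {"media": "tape",   "offsite": False, "immutable": False},  # second media type
    {"media": "object", "offsite": True,  "immutable": True},   # offsite, locked
]
print(meets_3_2_1_1(inventory))  # True
```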


How to use Microsoft KQL for SIEM insight

While KQL is easy to work with, you won’t get good results if you don’t understand the structure of your data. First, you need to know the names of all of the tables used in Sentinel’s workspace. These are needed to specify where you’re getting data from, with modifiers to take only a set number of rows and to limit how much data is returned. This data then needs to be sorted, with the option of taking only the latest results. Next, the data can be filtered, so for example, you’re only getting data from a specific IP range or for a set time period. Once data has been selected and filtered, it’s summarized. This creates a new table with only the data you’ve filtered and only in the columns you’ve chosen. Columns can be renamed as needed and can even be the product of KQL functions — for example summing data or using the maximum and minimum values for the data. The available functions include basic statistical operations, so you can use your queries to look for significant data — a useful tool when hunting suspected intrusions through gigabytes of logs. 
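
An illustrative query in that select-filter-summarize shape is shown below, embedded as a Python string so it can be kept in a script or pasted into the Sentinel query editor. The table and column names follow common Sentinel conventions but should be treated as assumptions and adjusted to your own workspace schema.

```python
# Illustrative KQL query in the shape described above: pick a table, filter by
# time window and IP range, summarize, then sort and limit the rows returned.
# Table and column names follow common Sentinel conventions but are assumptions;
# adjust them to your own workspace schema.

FAILED_SIGNINS_BY_IP = """
SigninLogs
| where TimeGenerated > ago(24h)              // filter: set time period
| where IPAddress startswith "203.0.113."     // filter: specific IP range
| where ResultType != "0"                     // failed sign-ins only
| summarize attempts = count(), last_seen = max(TimeGenerated) by IPAddress
| sort by attempts desc                       // most significant results first
| take 20                                     // only a set number of rows
"""

# Paste the string into the Sentinel / Log Analytics query editor, or run it
# from code with the Azure Monitor query SDK (not shown here).
print(FAILED_SIGNINS_BY_IP)
```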


Leaders anticipate cyber-catastrophe in 2023, report World Economic Forum, Accenture

“I think we may see a significant event in the next year, and it will be one in the ICS/OT technologies space. Due to long life, lack of security by design (due in many cases to age) and difficulty to patch, in mission critical areas — an attack in this space would have immense effects that will be felt,” France said. “So I somewhat agree with the hypothesis of the report and the contributors to the survey. You could already argue that we have seen a moderate attack with UK Royal Mail, where ransomware stopped the sending of international parcels for a week or more,” France said. France argues that organizations can insulate themselves from these threats by putting more resources into defensive measures and by treating cybersecurity as a board issue. Key steps include implementing responsive measures, providing employees with exercises on how to react, implementing recovery plans, planning for supply chain instability and looking for alternative vendors who can provide critical services in the event of a disruption.



Quote for the day:

“If we wait until we’re ready, we’ll be waiting for the rest of our lives.” -- Lemony Snicket

Daily Tech Digest - January 28, 2023

6 tips for making the most of a tight IT budget

Having a clear vision of where you are and where you are going helps to put everything into perspective. As Madhumita Mazumder, GM-ICT at Australian tourism company Journey Beyond, says, “If we have a proper strategic plan for the IT department that is aligned with the organization’s vision, we can achieve things within the budget and deal with half the problems that could arise six months or a year down the line.” Giving an example of this approach, Mazumder says, “We have got absolute clarity on pursuing a cloud-first strategy. The vision of having 100% cloud infrastructure enabled us to significantly reduce our third-party data center costs as we migrated it into our cloud environment.” Similarly, Mazumder is clear on the outsourcing versus insourcing debate. “I am a big fan of insourcing and support developing a team to take things in-house. For instance, having an in-house team ensures 100% patching of all my network devices on time. Patching happens at odd hours when the business isn’t operating.


The developer role is changing radically, and these figures show how

"Today, developers are no longer just people building software for technology companies. They're an increasingly diverse and global group of people working across industries, tinkering with code, design, and docs in their free time, contributing to open source projects, conducting scientific research, and more," writes Dohmke. Also, the world's developers are no longer so highly concentrated in the US. GitHub has about 17 million users in the US, which is still its largest user base, but the service predicts India -- whose GitHub developer population stands at 10 million today -- will surpass the US by 2025. "They're people working around the world to build software for hospitals, filmmaking, NASA, and the PyTorch project, which powers AI and machine learning applications. They're also people who want to help a loved one communicate and family members overcome illnesses," Dohmke notes. On top of this, Microsoft's multi-billion dollar investment in OpenAI is helping to attract new developers via services such as its paired programming coding assistant GitHub Copilot, which uses OpenAI's Codex to suggest coding solutions. 


Attackers move away from Office macros to LNK files for malware delivery

LNK abuse has been growing since last year, according to researchers from Cisco Talos, who have seen several attacker groups pivoting to it. One of those groups is behind the long-running Qakbot (also known as Qbot or Pinkslipbot) malware family. "Qakbot is known to evolve and adapt their operation according to the current popular delivery methods and defense techniques," the researchers said in a new report. "As recently as May 2022, their preferred method of distribution was to hijack email threads gathered from compromised machines, and insert attachments containing Office XLSB documents embedded with malicious macros. However, after Microsoft announced changes to how macros were executed by default on internet downloaded content, Talos found Qakbot increasingly moving away from the XLSB files in favor of ISO files containing a LNK file, which would download and execute the payload." However, LNK files have a lot of sections and contain a lot of metadata about the machines that generated them, leaving unique traces that can be associated with certain attack campaigns or attacker groups.


What developers should do during a downturn

Particularly if you have worked only at one company for a long while, it may be time to “upskill” yourself. There are more ways than ever to do this. While there are a lot of expensive so-called boot camps, these are lean times, and frankly, some of them are predatory. Consider self-study using MOOCs like Coursera, Udemy, Saylor, and EdX. These have university-style courses that are free or low-cost. If you are early in your career, you can now get a certification or even a bachelor’s degree in computer science entirely online. Both the University of London and BITS Pilani offer bachelor’s programs on Coursera. (A number of other schools offer master’s programs.) However, MOOCs are not the only game in town. Your local university is also getting in on the game and may offer completely online courses. Having done this recently, my advice is not to bother with a formal degree if you are already a seasoned professional, unless you are switching fields. Universities have a lot of “money” courses they make you take in which you fulfill “requirements” but in which you learn nothing of any value whatsoever. 


How Smarter Data Management Can Relieve Tech Supply Chain Woes

As data volumes have exploded in the last few years – primarily from growing unstructured data volumes such as user files, video, sensor data and images – it is no longer viable to have a one-size-fits-all data management strategy. Data needs will always change unpredictably over time. Organizations need a way to continually monitor data usage and move data sets to where they need to be at the right time. This is not just a metrics discussion but requires regular communication and collaboration with departments. What are the new requirements for compliance, security and auditing? What about analytics needs in the coming quarter and year? This information helps all IT departments optimize decisions for ongoing data management while still keeping costs and capacity in mind. For instance, by knowing that the R&D team always wants their data available for retesting for up to three months after a project, IT can keep it on the NAS for 90 days and then move it to a cloud data lake for long-term storage and potential cloud machine learning or artificial intelligence analysis. 
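
In practice, a policy like the R&D example can start as a simple age-based rule evaluated per data set, as in the sketch below; the paths, threshold and tier names are purely illustrative assumptions.

```python
# Illustrative age-based tiering rule matching the R&D example: keep data on
# the NAS for 90 days after last modification, then move it to a cloud data
# lake for long-term storage and later analysis. Paths, thresholds and tier
# names are hypothetical.

from datetime import datetime, timedelta

RETENTION_ON_NAS = timedelta(days=90)

def target_tier(last_modified, now=None):
    now = now or datetime.utcnow()
    return "nas" if now - last_modified <= RETENTION_ON_NAS else "cloud-data-lake"

datasets = {
    "rnd/project-a/results": datetime.utcnow() - timedelta(days=30),
    "rnd/project-b/results": datetime.utcnow() - timedelta(days=200),
}
for path, modified in datasets.items():
    print(path, "->", target_tier(modified))   # project-a stays, project-b moves
```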


Best practices for migrating unstructured data

The quality of your data will have a direct impact on how successful your migration is. To assess data quality, you should first examine the data’s structure. You’ll need to ensure that all data is properly organized, labeled and formatted. You should also examine any external factors that affect data quality, such as errors in source files or duplicate entries. Once you have evaluated the data’s structure, you should look for any possible inconsistencies. Check for incorrect spelling, typos and any other errors that could affect the accuracy of your migration. You should also ensure that all data is up-to-date and accurate. It is critical to ensure that your unstructured data complies with applicable laws, regulations and industry standards such as HIPAA, GDPR, GxP, PCI-DSS and SOX. To maintain compliance at all stages, you’ll need to ensure that the data migration process meets all relevant requirements for the laws that apply to your business. To begin, make sure you have the right security measures to protect data in transit and storage. This might include encryption at rest and in transit as well as other technical safeguards.
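
A hedged sketch of a pre-migration quality pass over file metadata is shown below: it flags missing labels, possible duplicates and stale records before anything moves. The record format and thresholds are hypothetical; a real pipeline would also validate against the source systems and the regulations that apply to your business.

```python
# Hypothetical pre-migration quality pass over unstructured-data metadata:
# flag records with missing labels, possible duplicates and stale content
# before anything is moved. The record format and thresholds are illustrative.

from collections import Counter
from datetime import datetime, timedelta

REQUIRED_FIELDS = {"name", "owner", "classification", "last_modified"}

def quality_issues(records, stale_after=timedelta(days=3 * 365)):
    issues = []
    name_counts = Counter(r.get("name") for r in records)
    for r in records:
        name = r.get("name")
        missing = REQUIRED_FIELDS - r.keys()
        if missing:
            issues.append((name, f"missing fields: {sorted(missing)}"))
        if name_counts[name] > 1:
            issues.append((name, "possible duplicate entry"))
        modified = r.get("last_modified")
        if modified and datetime.utcnow() - modified > stale_after:
            issues.append((name, "possibly out of date"))
    return issues

records = [
    {"name": "contract.pdf", "owner": "legal", "classification": "restricted",
     "last_modified": datetime(2016, 4, 1)},
    {"name": "contract.pdf", "owner": "legal"},
]
for name, problem in quality_issues(records):
    print(name, "-", problem)
```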


5 technical skills aspiring IT architects need to learn

Much like code, IT architecture design is also prone to technical debt. Technical debt includes the cost of additional work required later as a consequence of choosing easy solutions instead of better approaches. The primary objective of creating technical debt is to prioritize delivery over proper design principles. In general, technical debt is bad and should be avoided. However, an experienced architect knows when to use technical debt to speed up delivery, reduce the time to market, and achieve better results. A part of an architect's role is to draw a roadmap for technical debt and then document, track, and address it promptly. Overly architected systems may exhibit unintended consequences like performance impacts and suboptimal user experience due to network latency and other factors. Much as database developers compromise on normalization rules and introduce data redundancy to improve query performance, architects sometimes need to compromise on architecture and design principles to meet specific objectives related to things like time to market, system performance, and customer experience.


ChatGPT is 'not particularly innovative,' and 'nothing revolutionary'

"OpenAI is not particularly an advance compared to the other labs, at all," said LeCun. "It's not only just Google and Meta, but there are half a dozen startups that basically have very similar technology to it," added LeCun. "I don't want to say it's not rocket science, but it's really shared, there's no secret behind it, if you will." LeCun noted the many ways in which ChatGPT, and the program upon which it builds, OpenAI's GPT-3, is composed of multiple pieces of technology developed over many years by many parties. "You have to realize, ChatGPT uses Transformer architectures that are pre-trained in this self-supervised manner," observed LeCun. "Self-supervised-learning is something I've been advocating for a long time, even before OpenAI existed," he said. "Transformers is a Google invention," noted LeCun, referring to the language neural net unveiled by Google in 2017, which has become the basis for a vast array of language programs, including GPT-3. The work on such language programs goes back decades, said LeCun.


Delegation: The biggest test for transformational CIOs

CIOs can take steps to minimize the risks of delegated decisions resulting in bad decisions by ensuring that the people to whom the IT organization delegates have the right skills and expertise, as well as an understanding of overall business goals and the architectural frameworks into which their decisions must fit. Perhaps the biggest concern is around cyber security, says Atkinson. “When you distribute decision making for the launch of technology environments, you risk having under-managed environments for cyber security purposes,” he says. CIOs can address this by establishing standards and encouraging more collaborative decision making. Royal Caribbean’s Poulter sees teamwork as an essential component of risk reduction. The security team is just one participant in a decision-making team that should include application, architecture, infrastructure, and other experts, she says. Giving teams the autonomy to come together to make cross-domain decisions is hugely important.


3 Data Management Rules to Live By

As companies continue to build out dedicated data teams and full-fledged data-centric organizations, look for a higher level of specialization to make its way to the management of the data stack. Here are just a few of the roles I expect to play a major part in managing the data stack in the future. The data product manager is responsible for managing the life cycle of a given data product and is often responsible for managing cross-functional stakeholders, product road maps, and other strategic tasks. The analytics engineer, a term made popular by dbt Labs, sits between a data engineer and analysts and is responsible for transforming and modeling the data such that stakeholders are empowered to trust and use that data. Analytics engineers are simultaneously specialists and generalists, often owning several tools in the stack and juggling many technical and less technical tasks. The data reliability engineer is dedicated to building more resilient data stacks, primarily via data observability, testing, and other common approaches. Data reliability engineers often possess DevOps skills and experience that can be directly applied to their new roles.



Quote for the day:

"Leadership, on the other hand, is about creating change you believe in." -- Seth Godin

Daily Tech Digest - January 27, 2023

The Evolution Of Internet Of Things

The IoT ecosystem is more than mere connected devices, and IIoT is more than a matter of connecting plants and machinery to the edge or cloud storage. A whole range of technologies is involved in the process, from chips and sensors that capture data from physical assets to communication networks; advanced analytics, including machine learning and artificial intelligence; simulation and collaborative tools, including digital twins; machine vision and human-machine interfaces; and security systems and protocols. Among the major players in the IoT/IIoT space are ABB, Amazon Web Services, Cisco Systems, General Electric, IBM Corporation, Intel Corporation, Microsoft, Oracle Corporation, Robert Bosch GmbH, Rockwell Automation and Siemens, besides several others. Industrial automation, as exemplified best by the progress made by the automobile industry over the last 100 years, increased productivity dramatically and reduced the cost of production.


Why Founders Are Hiring These Two Coaches to Supercharge Their Business

What I want to encourage founders to do is invest in help, join a community or hire a coach, or get an advisor who can really be there in a more effective capacity, someone that you can bounce ideas off of, someone that you can be extremely honest with when things are going wrong. You need that support. You can't build the business alone. And it really takes a combination of mindset and strategic work. So when we work with founders, we build a strategic roadmap while we also work on this mindset and in their professional growth. We believe that you cannot have a successful company without both pieces of the puzzle. ... Here's what I say about juggling it all, is that, think about it as if you're juggling balls, and some of the balls are glass and some are rubber, and your clean house is a rubber ball and your health is a glass ball. So make sure that the balls that you're dropping are rubber and not glass. You'll always be dropping balls. And the other thing is everyone needs help. 


Difference Between Conversational AI and Generative AI

Conversational AI is artificial intelligence (AI) that can engage in conversation and refers to tools that allow users to communicate with virtual assistants or chatbots. They mimic human interactions by identifying speech and text inputs and translating their contents into other languages using massive amounts of data, machine learning, and natural language processing. Generative AI, by contrast, often uses deep learning techniques, such as generative adversarial networks (GANs), to identify patterns and features in a given dataset before creating new data from that input. Now that we have a fair idea of conversational AI and generative AI, let's dive deeper into how they work and differ. In conversational AI, the two major components are natural language processing (NLP) and machine learning; to keep the AI algorithms up to date, NLP operations interact with machine learning processes in a continual feedback loop. The fundamental elements of conversational AI enable it to process, comprehend, and produce responses naturally.


3 business application security risks businesses need to prepare for in 2023

As organizations ramp up their digital transformation efforts and transition between on-premises and cloud instances, they’re also increasingly bringing in web-facing applications. Applications that used to be kept behind enterprise “walls” in the days of on-premises-only environments are now fully exposed online, and cybercriminals have taken advantage. Given the wealth of sensitive information kept within these applications, enterprises must ensure internet-facing vulnerabilities have the highest priority. ... While zero-day vulnerabilities are common entry points for threat actors, they also tend to pay close attention to patch release dates, as they know many enterprises fall behind in patching their vulnerabilities. Many patch management processes fail because security teams use manual methods to install security fixes, which takes up a significant portion of their already-limited time. As the number of patches piles up, it can be difficult to determine which patches must be applied first and which can be left for later.


Uncertainty persists, but enterprises rush to adopt network as a service

“Although enterprises can see the operational value NaaS could bring, they worry about the potentially higher total cost of ownership (TCO), day-to-day management challenges and risk of significant fluctuations in monthly bills,” Hayden added. “This leaves a massive challenge for communications service providers (CSPs).” The report acknowledged that CSPs have made large strides over the past few years as they look to leverage their underlying infrastructure to climb the digital value chain by delivering cloud-enabled integrated network services. It emphasises that accelerating NaaS adoption should be a top priority for CSPs as it offers a clear avenue towards network monetisation through over-the-top (OTT) service delivery. “CSPs must first invest heavily in their NaaS solution looking to integrate automation and drive platform openness,” Hayden recommended. “On top of this, they must look to develop a partnership ecosystem comprised of systems integrators and network service partners.”


What the FBI’s Hive takedown means for the ransomware economy

“Today’s disruption of the Russian Hive ransomware infrastructure underscores the historic international cooperation between law enforcement agencies. The International Ransomware Taskforce is having an impact,” said Tom Kellermann, CISM, senior VP of cyberstrategy at Contrast Security. However, Kellermann warns that there’s still more to be done to address the impunity of Russian state-backed cybergangs, who are free to engage in criminal activity internationally with little threat of prosecution. ... “Disrupting Hive is no doubt a victory, but the war is far from over,” said Kev Breen, director of cyber threat research at Immersive Labs. “While this action will have a short-term effect on the proliferation of ransomware, Hive operates under a ransomware-as-a-service (RaaS) model, meaning they use affiliates that are responsible for gaining the initial foothold and then dropping the ransomware payload. “With the proverbial head of this snake cut off, those affiliates will turn to other ransomware operators and pick up where they left off,” Breen said.


Data Lake Security: Dive into the Best Practices

The three key security risks facing data lakes are: Access control: With no database tables and more fluid permissions, access control is more challenging in a data lake. Moreover, permissions are difficult to set up and must be based on specific objects or metadata definitions. Commonly, employees across the company also have access to the lake, which contains personal data or data that falls under compliance regulations. With 58% of security incidents caused by insider threats, according to a commissioned Forrester Consulting study, employee access to sensitive data is a security nightmare if left unchecked. Data protection: Data lakes often serve as a singular repository for an organization’s information, making them a valuable target to attack. Without proper access controls in place, bad actors can gain access and obtain sensitive data from across the company. Governance, privacy, and compliance: Because employees from across the company can feed data into the data lake without inspection, some data may be subject to privacy and regulatory requirements that other data isn’t.


Why Securing Software Should Go Far Beyond Trusting Your Vendors

Securing a software supply chain against attacks takes knowing what elements in your system have the potential to be attacked. More than three-quarters (77%) of those BlackBerry surveyed said that, in the last 12 months, they discovered previously unknown participants within their software supply chain — entities they had not been monitoring for adherence to critical security standards. That’s even when these companies were already rigorously using data-encryption, Identity Access Management (IAM), and Secure Privileged Access Management (PAM) frameworks. As a result, malicious lines of code can sit in blind spots for years, ready to be exploited when the attacker chooses. The Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), and the Office of the Director of National Intelligence (ODNI) recently issued a recommended practices guide for customers on securing the software supply chain. 


7 Insights From a Ransomware Negotiator

"We really like to focus on emphasizing communication when it comes to threat intelligence — whether it's threat intelligence talking with the SOC or the incident response team, or even vulnerability management," he says. "Getting an idea of what these trends look like, what the threat actors are focusing on, how much they pop up and go away, all of that is very valuable for the defenders to know." Even though the underlying TTPs of fulltime groups makes a lot of ransomware detection and response a little easier, there are still some big variables out there. For example, as many groups have employed the ransomware-as-a-service (RaaS) model, they employ a lot more affiliates, which means negotiators are always dealing with different people. "In the early days of ransomware, when you started negotiations, there was a good chance you were dealing with the same person if you were dealing with the same ransomware," Schmitt says. "But in today's ecosystem, there are just so many different groups, and so many different affiliates that are participating as part of these groups, that a lot of times you're almost starting from scratch."


The downsides of cloud-native solutions

One of the main issues with cloud-native development and deployment is that it can lead to vendor lock-in. When an application is built and deployed to a specific cloud provider, you typically use the native capabilities of that cloud provider. It can be difficult and costly to move to a different provider or an on-premises platform. This can limit an organization's flexibility in terms of where it chooses to run its applications. It flies in the face of what many believe to be a core capability of cloud-native development: portability. ... Another downside is that cloud-native development can be complex and require a different set of skills and tools compared to traditional on-premises and public cloud development. This can be a challenge for organizations that are not familiar with cloud-native practices and may require additional training and resources. I often see poorly designed cloud-native deployments because of this issue. If you’re not skilled in building and deploying these types of systems, the likely outcome is poorly designed, overly complex applications.
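One commonly cited way to soften the lock-in trade-off described above is to keep provider-specific calls behind a thin interface. The sketch below is illustrative only, not a recommendation from the article; the interface and class names are hypothetical.

```python
# A minimal sketch of limiting lock-in: application code depends on a small
# interface and never imports a cloud SDK directly. Names are hypothetical.

from abc import ABC, abstractmethod
from pathlib import Path


class ObjectStore(ABC):
    """All the application is allowed to know about storage."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class LocalObjectStore(ObjectStore):
    """Filesystem-backed implementation used here for demonstration."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()


if __name__ == "__main__":
    store: ObjectStore = LocalObjectStore("/tmp/demo-store")
    store.put("report.txt", b"portable by construction")
    print(store.get("report.txt"))
```

A cloud-backed implementation (for example, one wrapping an S3 or GCS SDK) could implement the same interface and be selected through configuration, which limits how much code has to change when the provider does.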



Quote for the day:

"Who we are cannot be separated from where we're from." -- Malcolm Gladwell

Daily Tech Digest - January 26, 2023

Bringing IT and Business Closer Together: Aiming for Business Intimacy

“Businesses today are looking to drive new value from software, to increase competitiveness, open new revenue streams, and increase efficiencies,” he explains. “As part of this, the business often drives the software decisions, proof-of-concepts, vendor selection, and more.” It’s not until the end of the process that IT is brought in to “sign off and deploy”, and this siloed approach results in teams working separately, often producing poor results and driving animosity between the groups. “Instead, if the business and IT teams work together for the entire project, requirements are surfaced and expertise from across the organization is brought in to make the best possible decisions,” Maxey says. From his perspective, there are several best practices that can ensure closer alignment between IT and the business. “Embed IT into the business unit, versus in a separate department, and ask IT to project-manage business software projects so they are always in discussions and aware of the process,” he says.


IT leadership: Seven spectrums of choice for CIOs in 2023

Purpose is the first thing that we want people to be thinking about in light of the office shock that they have been going through. It’s a question for organizational leaders - what is the purpose of your organization? On the spectrum, we say that a purpose ranges from the individual to the collective. And it’s important to think about that because for an individual first starting out in the workplace, their purpose may be very straightforward in terms of supporting themselves and their family. But as they get further into their career, they can enlarge their thinking about a purpose that actually can make the world better. And the same thing is true for organizations – they may start out very focused on getting their business going, but later can think about how they can contribute to the world. And in that sense, another spectrum – outcomes – is very closely related. You may start out with your primary outcome being profit, but then once you’re established and comfortable, you can think much larger, like bringing prosperity to the world, whether that world is local or much larger.


The risks of 5G security

With 5G-enabled automated communications, machines and devices in homes, factories and on-the-go will communicate vast amounts of data with no human intervention, creating greater risk. Kayne McGladrey, field CISO at HyperProof and a member of IEEE, explained the dangers of such an approach. “Low-cost, high-speed and generally unmonitored networking devices provide threat actors a reliable and robust infrastructure for launching attacks or running command and control infrastructure that will take longer to detect and evict,” he said. McGladrey also pointed out that as organizations deploy 5G as a replacement for Wi-Fi, they may not correctly configure or manage the optional but recommended security controls. “While telecommunications providers will have adequate budget and staffing to ensure the security of their networks, private 5G networks may not and thus become an ideal target for a threat actor,” he said. 5G virtualized network architecture opens every door and window in the house to hackers because it creates — in fact, requires — an extraneous supply chain for software, hardware and services. 


Fujitsu: Quantum computers no threat to encryption just yet

Fujitsu said its researchers also estimate that it would be necessary for such a fault-tolerant quantum computer to work on the problem for about 104 days to successfully crack RSA. However, before anyone gets too complacent, it should be noted that IBM's Osprey has three times the number of qubits featured in its Eagle processor from the previous year, and the company is aiming to have a 4,158-qubit system by 2025. If it continues to advance at this pace, it may well surpass 10,000 qubits before the end of this decade. And we'd bet our bottom dollar intelligence agencies, such as America's NSA, are or will be all over quantum in case the tech manages to crack encryption. Quantum-resistant algorithms are therefore still worth the effort, even if the NSA is ostensibly skeptical of quantum computing's crypto-smashing powers. Fujitsu said that although its research indicates the limitations of quantum computing technology preclude the possibility of it beating current encryption algorithms in the short term, the IT giant will continue to evaluate the potential impact of increasingly powerful quantum systems on cryptography security.


State of DevOps: Success happens through platform engineering

The platform engineering team takes responsibility for designing and building self-service capabilities to minimise the amount of work developers need to do themselves. This, according to the report’s authors, enables fast-flow software delivery. Platform teams deliver shared infrastructure platforms to internal software development teams. The team responsible for the platform treats it as a product for its users, not just an IT project. ... Ronan Keenan, research director at Perforce, said the concepts behind platform engineering have been used on a small scale at large technology organisations for many years, but platform engineering provides a more concrete focus. “The concept is about building self-service capabilities which engineers and developers can use. This reduces their workload as they do not have to build these capabilities themselves,” he said, adding that a platform team builds and maintains shared IT infrastructure. With such shared infrastructure in place: “The software development process can run quicker since you are lightening the load on the developers and engineers. Platform engineering also offers a more consistent process.”
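To illustrate what "self-service capabilities" might look like in practice, here is a minimal, hypothetical sketch of a paved-road request handler of the kind a platform team could expose; the defaults and field names are invented for illustration and are not taken from the report.

```python
# Illustrative sketch only: a tiny "self-service" request handler where
# developers describe what they need and the platform fills in paved-road
# defaults. All names and defaults below are hypothetical.

GOLDEN_PATH_DEFAULTS = {
    "runtime": "python3.12",
    "replicas": 2,
    "observability": ["logs", "metrics", "traces"],
}

ALLOWED_SIZES = {"small", "medium", "large"}


def provision_request(service_name: str, size: str = "small") -> dict:
    """Validate a developer's request and return the plan the platform would apply."""
    if size not in ALLOWED_SIZES:
        raise ValueError(f"size must be one of {sorted(ALLOWED_SIZES)}")
    plan = dict(GOLDEN_PATH_DEFAULTS)
    plan.update({"service": service_name, "size": size})
    return plan


if __name__ == "__main__":
    print(provision_request("payments-api", size="medium"))
```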


How Can Big Tech Layoffs be a Boon for the Quantum Computing Cloud?

The good news is that a skilled classical engineer can pick up the necessary knowledge from a variety of places, including online and short courses, to collaborate effectively with quantum physicists. Quantum organizations that are in desperate need of personnel should therefore consider recruiting people with conventional computing experience. Not only may such hires become productive sooner than expected, they can also draw on their experience at traditional computing companies to offer original solutions to the technical issues that arise. The cloud, however, may offer a bright spot. The difficulty quantum enterprises face in finding suitable people has come up frequently at industry conferences. Some of it was driven in recent years by fierce competition from traditional computing companies, which increased their development efforts during the Covid years and adopted work-from-home policies that made it simpler to join an organization headquartered in a different city.


Attackers use portable executables of remote management software to great effect

The phishing emails are help desk-themed – e.g., impersonating the Geek Squad or GeekSupport – and “threaten” the recipient with the renewal of a pricey service/subscription. The goal is to get the recipient to call a specific phone number manned by the attackers, who then try to convince the target to install the remote management software. “CISA noted that the actors did not install downloaded RMM clients on the compromised host. Instead, the actors downloaded AnyDesk and ScreenConnect as self-contained, portable executables configured to connect to the actor’s RMM server,” the agency explained. “Portable executables launch within the user’s context without installation. Because portable executables do not require administrator privileges, they can allow execution of unapproved software even if a risk management control may be in place to audit or block the same software’s installation on the network. Threat actors can leverage a portable executable with local user rights to attack other vulnerable machines within the local intranet or establish long term persistent access as a local user service.”
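A hedged example of how defenders might look for the behaviour CISA describes: the Python sketch below (using the third-party psutil library) flags processes whose names match common RMM tools but which run from user-writable paths, a hint that a portable executable rather than an installed client is in use. The name list and path heuristics are illustrative assumptions, not CISA guidance.

```python
# Minimal detection sketch: flag RMM-named processes running from
# user-writable locations instead of normal install paths.
# Requires the third-party psutil package; heuristics are illustrative.

import psutil

RMM_NAMES = ("anydesk", "screenconnect", "connectwise")
USER_WRITABLE_HINTS = ("\\users\\", "\\temp\\", "\\downloads\\", "/home/", "/tmp/")


def suspicious_rmm_processes() -> list[str]:
    hits = []
    for proc in psutil.process_iter(["name", "exe"]):
        name = (proc.info.get("name") or "").lower()
        exe = (proc.info.get("exe") or "").lower()
        if any(tool in name for tool in RMM_NAMES) and any(
            hint in exe for hint in USER_WRITABLE_HINTS
        ):
            hits.append(f"{name} running from {exe}")
    return hits


if __name__ == "__main__":
    for finding in suspicious_rmm_processes():
        print("review:", finding)
```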


The Anticipation Game: Spotlight on Data Backups

Regardless of how reliable a storage platform is, keeping all critical data stored in one place is a disaster waiting to happen for any organisation. To avoid the pains of security breaches, ransom payments, and data leaks, companies should aim to create and distribute backup copies across multiple onsite and offsite storage destinations. Another way to truly keep ransomware at bay is to apply immutability to backup data. Immutability means data is stored in such a way that it cannot be altered, deleted, or encrypted by ransomware. The ideal data backup solution should have a well-rounded set of ransomware protection and recovery features, allowing customers to achieve near-zero downtime and avoid paying ransom in return for access to the data. For example, it could offer the capability to store backups in ransomware-resilient Amazon S3 buckets and hardened Linux-based local repositories to prevent data deletion or encryption by ransomware. Ideally, IT admin teams would also be able to leverage backup-to-tape functionality to create air-gapped backups on tape to reduce the chance of ransomware encryption.
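As a concrete (and hedged) illustration of immutable backups on object storage, the boto3 sketch below creates an S3 bucket with Object Lock enabled and sets a default compliance-mode retention period; the bucket name and 30-day retention are placeholders, and this is one common pattern rather than any specific vendor's feature.

```python
# Sketch of immutable backup storage on AWS: an S3 bucket with Object Lock
# in compliance mode, so objects cannot be deleted or overwritten until the
# retention period expires. Requires boto3 and valid AWS credentials; the
# bucket name and 30-day retention below are placeholders.

import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # placeholder name

# Object Lock must be enabled at bucket creation time.
# (In regions other than us-east-1, a CreateBucketConfiguration with a
# LocationConstraint must also be supplied.)
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Default retention: every new object is locked for 30 days in COMPLIANCE
# mode, which cannot be shortened or removed before it expires.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```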


B2B integration is the backbone of a resilient supply chain: OpenText study

Advanced supply chain integration capabilities can help support more efficient and effective current approaches as well as new models that translate directly to business performance. ... B2B integration capabilities and processing align with top business priorities of reducing operational and logistical costs, faster time to market, improving data quality/accuracy and progressing visibility. Recognizing the need for seamless B2B integration and a future-proof supply chain, OpenText offers a portfolio of end-to-end solutions through the OpenText Business Network Cloud. This network provides users with the ability to automate business processes and facilitate efficient, secure, and compliant collaboration between people, systems, and things – providing a true foundation for establishing an advanced digital backbone to help support business growth and transformation initiatives. By connecting to OpenText’s powerful suite of cloud applications via its secure, scalable and highly reliable OpenText Trading Grid platform, users can allow internal and external stakeholders to collaborate seamlessly across this single and central network to exchange transactions such as purchase orders, shipment notices and payment instructions.


Five steps to build a business case for data and analytics governance

The causal relationship between poor data and analytics and poor business performance must be highlighted if a compelling business case for governance is to be made. Initially, look to identify the business processes and process owners that are critical in addressing the problem statement. These will often span multiple business areas, so look to focus on key processes rather than on lines of business. This will help break down the silos that have led to the insular and disconnected governance of data and analytics. Determine the most impactful key performance indicators (KPIs) and key risk indicators (KRIs) for business success, and then identify the specific data and analytics assets that are used in the KPIs and KRIs. These assets are the ones that must fall within the scope of the data and analytics governance proposal. A key characteristic of highly successful D&A governance initiatives is their ability to effectively define and manage scope. Be clear on what is in scope and what is out of scope for governance while identifying the key stakeholders needed in the D&A governance steering group. 
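The step of tying KPIs and KRIs to the specific data assets they depend on can be captured in a simple, reviewable mapping that then defines the scope of the governance proposal. The Python sketch below is purely illustrative; the KPI names and asset identifiers are hypothetical.

```python
# Illustrative only: map KPIs/KRIs to the data assets they depend on, then
# derive the set of assets that must fall within governance scope.

KPI_TO_ASSETS = {
    "on-time-delivery-rate": ["warehouse.shipments", "crm.orders"],
    "customer-churn-risk (KRI)": ["crm.accounts", "billing.invoices"],
}


def governance_scope(mapping: dict[str, list[str]]) -> list[str]:
    """Return the de-duplicated list of assets that must fall under governance."""
    assets = {asset for asset_list in mapping.values() for asset in asset_list}
    return sorted(assets)


if __name__ == "__main__":
    print(governance_scope(KPI_TO_ASSETS))
```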



Quote for the day:

"The litmus test for our success as Leaders is not how many people we are leading, but how many we are transforming into leaders" -- Kayode Fayemi