Daily Tech Digest - January 29, 2023

Data Mesh Architecture Benefits and Challenges

Data mesh architectures can help businesses find quick solutions to day-to-day problems, discover better ways to manage their resources, and develop more agile business models. Here is a quick review of data mesh architecture benefits:

- The data mesh architecture is adaptable: it can evolve as the company scales, changes, and grows.
- The data mesh enables data from disparate systems to be collected, integrated, and analyzed together, eliminating the need to extract data from those systems into one central location for further processing.
- Within a data mesh, each individual domain becomes a mini-enterprise and gains the power to self-manage and self-serve all aspects of its data science and data processing projects.
- A data mesh architecture allows companies to increase efficiency by eliminating the single-pipeline data flow, while protecting the system through a centralized monitoring infrastructure.
- Domain teams can design and develop their own need-specific analytics and operational use cases while maintaining full control of all their data products and services.
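
As a rough illustration of that domain-ownership model, here is a minimal Python sketch (all names here are hypothetical, not from the article) in which domain teams publish self-describing data products and a central catalog only discovers and serves them, rather than processing the data itself:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DataProduct:
    """A domain-owned data product: the owning team controls the logic."""
    domain: str                        # e.g. "orders", "payments"
    name: str
    produce: Callable[[], List[dict]]  # domain-specific pipeline, owned by the team

@dataclass
class MeshCatalog:
    """Central catalog: discovery and monitoring, not centralized processing."""
    products: Dict[str, DataProduct] = field(default_factory=dict)

    def register(self, product: DataProduct) -> None:
        self.products[f"{product.domain}.{product.name}"] = product

    def consume(self, key: str) -> List[dict]:
        # Consumers read through the catalog; the domain team's code runs.
        return self.products[key].produce()

catalog = MeshCatalog()
catalog.register(DataProduct("orders", "daily_totals",
                             lambda: [{"day": "2023-01-29", "total": 1234}]))
print(catalog.consume("orders.daily_totals"))
```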


Uncovering the Value of Data & Analytics: Transformation With Targeted Reporting

Most of the time, (Cloud) Data & Analytics transformations are initially approved for implementation based on a solid business case with clear return expectations. However, programs often don’t have a functioning value framework to report on the business value generated by the change and the progress toward the initial expectations. In such cases, the transformation’s impact is a “black box” to executives and business leaders, with no clear indication of direction. As time passes and the costs associated with transformation programs increase due to scaling, an insufficient Value Reporting Framework can lead to a loss of executive buy-in and a reduction in investment budgets. Furthermore, with high market volatility, initiatives without a tangible influence on the company’s bottom line tend to be deprioritized quickly. On the more positive side, many companies do have robust value scorecards to track their transformation performance. However, the metrics in these scorecards tend to be either too operational for executives to digest easily or focused exclusively on cost.


Elevating Security Alert Management Using Automation

Context: every security analyst says they need it, but everyone seems to have a different definition for it. If you’ve ever worked an alert queue and thought to yourself, “I wish I could stop these alerts from appearing right now” or “Why am I looking at activity that someone else is already triaging?”, then this section is for you. Within the first two weeks of deployment, this feature of the system reduced our alert volume by 25%, saving 3 to 4.5 hours of manual effort. In our alert management system, “context” is information derived from the alert payload that is used as metadata for suppression, deduplication, and metrics. The reduction of toil in the system is primarily attributable to its ability to use context to stop wasteful alerts from reaching the team. This creates the opportunity for the team to, for example, suppress alerts that we know require tuning by a detection engineer, or to ignore duplicate alerts for activity that is being investigated but may be on hold while we wait for additional information. These alerts are never dropped; they still flow through the rest of the system and generate a ticket, but they are not assigned to a person for triage.
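
The article doesn’t show code, but the core idea is easy to sketch. In the minimal Python example below (the field names and suppression rules are invented for illustration), context is derived from the alert payload and used to suppress or deduplicate alerts, while every alert still produces a ticket:

```python
from dataclasses import dataclass

# Hypothetical suppression rules: context keys for alerts known to need tuning.
SUPPRESSED_CONTEXTS = {("noisy_detection_7", "build-farm")}

@dataclass
class Alert:
    detection_id: str
    hostname: str
    payload: dict

seen_contexts: set = set()  # contexts already under investigation (deduplication)

def route(alert: Alert) -> str:
    """Derive context from the alert and decide how it is handled."""
    context = (alert.detection_id, alert.hostname)
    if context in SUPPRESSED_CONTEXTS:
        return "ticket-only"       # still generates a ticket, never assigned
    if context in seen_contexts:
        return "deduplicated"      # folded into the ticket already being triaged
    seen_contexts.add(context)
    return "assign-for-triage"

print(route(Alert("noisy_detection_7", "build-farm", {})))  # ticket-only
print(route(Alert("phishing_click", "laptop-42", {})))      # assign-for-triage
print(route(Alert("phishing_click", "laptop-42", {})))      # deduplicated
```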


Could A Data Breach Land Your CISO In Prison?

Why would a CISO worry about personally facing legal consequences for company cybersecurity decisions? I don’t have direct knowledge of Kissner’s motives. However, I do know that for the last several months CISOs have been talking to each other about how, last October, a federal jury convicted the CISO of a major U.S. company for covering up a data breach. The jury found Joe Sullivan, a former Chief Security Officer, guilty of obstructing justice and of concealing a felony, charges stemming from “bug bounty” payments he authorized to hackers who breached the company in 2016. The company was already responding to an investigation into a 2014 breach but did not inform the FTC about the new breach in 2016. Sullivan didn’t make that decision alone: others in the company were looped in, including then-CEO Travis Kalanick, the Chief Privacy Officer, and the company’s in-house privacy/security lawyer. Nevertheless, Sullivan was the only employee to face charges. How might CISOs handle their roles differently in a world where a poorly handled breach won’t just get you fired but might land you in prison?


The new age of exploration: Staking a claim in the metaverse

Spatial ownership is the essential concept that makes possible an open metaverse and a 3D digital twin of the Earth that is not built or controlled by a monopolistic entity. Spatial ownership enables users to own virtual land in the metaverse. It uses non-fungible tokens (NFTs), which represent unique digital assets that can have only one official owner at a time and can’t be forged or modified. In the metaverse, users can buy NFTs linked to particular parcels of land, representing their ownership of these “properties.” Spatial ownership in the metaverse can be compared to purchasing web domains on today’s internet. As with physical real estate, some people buy web domains speculatively, hoping to sell the rights to a potentially popular or unique URL at a future date, while others buy simply to lock down control and ownership of their own little portion of the web. Domains are similar to prime real estate in that almost every business needs one, and many brands will look for the same or similar names. The perfect domain name can help a business monopolize its market and capture the lion’s share of web visibility in its niche.
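
As a toy illustration of the one-owner-at-a-time property described above, here is a minimal Python sketch. It models only the invariant, not any real blockchain or metaverse platform, and all the names are hypothetical:

```python
from typing import Dict

class ParcelRegistry:
    """Toy model of NFT-style land ownership: one owner per parcel, always."""
    def __init__(self) -> None:
        self._owners: Dict[str, str] = {}   # parcel_id -> owner address

    def mint(self, parcel_id: str, owner: str) -> None:
        if parcel_id in self._owners:
            raise ValueError("parcel already minted")  # can't be duplicated
        self._owners[parcel_id] = owner

    def transfer(self, parcel_id: str, seller: str, buyer: str) -> None:
        if self._owners.get(parcel_id) != seller:
            raise PermissionError("only the current owner can sell")
        self._owners[parcel_id] = buyer     # ownership moves to exactly one buyer

registry = ParcelRegistry()
registry.mint("earth:52.5200N:13.4050E", "alice")
registry.transfer("earth:52.5200N:13.4050E", "alice", "bob")
```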


Empowering Leadership in a VUCA World

The term VUCA (volatility, uncertainty, complexity, and ambiguity) aptly applies to the world we live in. Making business decisions has become incredibly complex, and we’re not just making traditional budget and managerial decisions. More than ever, leaders have to consider community impact, employee wellbeing, and business continuity under extraordinary uncertainty. There are so many considerations in even the smallest decisions we make. The highly distributed nature of how people work today means we have to consider the broader potential impact of every statement and every choice. Leaders have a responsibility to think about equity when some employees are sitting in the room with you and others are remote. How much face time are you giving each? Are you treating instant messages with the same level of attention as someone dropping into your office? This situation is not likely to be any less of a challenge for future leaders. It’s our responsibility as leaders, as people who impact the future of our businesses, to give all the people in our organizations an equal opportunity to contribute and grow.


Using Artificial Intelligence To Tame Quantum Systems

Quantum computing has the potential to revolutionize the world by enabling high computing speeds and reshaping cryptographic techniques. That is why many research institutes and big-tech companies such as Google and IBM are investing a lot of resources in developing such technologies. But to enable this, researchers must achieve complete control over the operation of such quantum systems at very high speed, so that the effects of noise and damping can be eliminated. “In order to stabilize a quantum system, control pulses must be fast – and our artificial intelligence controllers have shown the promise to achieve such a feat,” Dr. Sarma said. “Thus, our proposed method of quantum control using an AI controller could provide a breakthrough in the field of high-speed quantum computing, and it might be a first step to achieving quantum machines that are self-driving, similar to self-driving cars. We are hopeful that such methods will attract many quantum researchers for future technological developments.”


Avoid a Wipeout: How To Protect Organisations From Wiper Malware

A 3-2-1-1 data-protection strategy is a best practice for defending against malware, including wiper attacks. This strategy entails maintaining three copies of your data, on two different media types, with one copy stored offsite. The final 1 in the equation is immutable object storage. By maintaining multiple copies of data, organisations have a backup available in case one copy is lost or corrupted. This is imperative in the event of a wiper attack, which destroys or erases data. Storing data on different media types also helps protect against wiper attacks: if one type of media is compromised, you still have access to your data through the other copies. Keeping at least one copy of your data offsite, either in a physical location or in the cloud, provides an additional layer of protection; if a wiper attack destroys the on-site copies of your data, you’ll still have access to your offsite backup. The final advantage is immutable object storage, which involves taking snapshots of your data every 90 seconds, ensuring that you can recover it quickly even after a wiper attack.
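
The 3-2-1-1 rule is mechanical enough to check in code. The Python sketch below (a hypothetical illustration, not any vendor’s tooling) verifies a backup plan against each of the four requirements:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BackupCopy:
    media: str        # e.g. "disk", "tape", "cloud-object"
    offsite: bool
    immutable: bool   # object-locked / snapshot-protected storage

def meets_3211(copies: List[BackupCopy]) -> bool:
    """Check a backup plan against the 3-2-1-1 rule described above."""
    return (len(copies) >= 3                              # 3 copies of the data
            and len({c.media for c in copies}) >= 2       # 2 different media types
            and any(c.offsite for c in copies)            # 1 copy stored offsite
            and any(c.immutable for c in copies))         # 1 immutable copy

plan = [BackupCopy("disk", offsite=False, immutable=False),
        BackupCopy("tape", offsite=True, immutable=False),
        BackupCopy("cloud-object", offsite=True, immutable=True)]
print(meets_3211(plan))  # True
```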


How to use Microsoft KQL for SIEM insight

While KQL is easy to work with, you won’t get good results if you don’t understand the structure of your data. First, you need to know the names of the tables used in Sentinel’s workspace. These specify where you’re getting data from, with modifiers to take only a set number of rows and to limit how much data is returned. The data then needs to be sorted, with the option of taking only the latest results. Next, the data can be filtered so that, for example, you’re only getting data from a specific IP range or for a set time period. Once data has been selected and filtered, it’s summarized. This creates a new table with only the data you’ve filtered and only the columns you’ve chosen. Columns can be renamed as needed and can even be the product of KQL functions, for example summing data or taking the maximum and minimum values of the data. The available functions include basic statistical operations, so you can use your queries to look for significant data, a useful tool when hunting suspected intrusions through gigabytes of logs.
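
As an illustration, the sketch below follows exactly that shape (table, filter, summarize, sort, take). It assumes the azure-monitor-query and azure-identity Python packages; SigninLogs is a standard Sentinel table, while the workspace ID and IP range are placeholders for this example:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Table -> filter -> summarize -> sort -> take, as described above.
QUERY = """
SigninLogs
| where TimeGenerated > ago(1d)
| where ipv4_is_in_range(IPAddress, "10.0.0.0/16")
| summarize Attempts = count(), LastSeen = max(TimeGenerated) by IPAddress
| sort by Attempts desc
| take 20
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<your-sentinel-workspace-id>",  # placeholder
    query=QUERY,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```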


Leaders anticipate cyber-catastrophe in 2023, World Economic Forum and Accenture report

“I think we may see a significant event in the next year, and it will be one in the ICS/OT technologies space. Due to long life, lack of security by design (due in many cases to age) and difficulty to patch, in mission critical areas — an attack in this space would have immense effects that will be felt,” France said. “So I somewhat agree with the hypothesis of the report and the contributors to the survey. You could already argue that we have seen a moderate attack with UK Royal Mail, where ransomware stopped the sending of international parcels for a week or more,” France said. France argues that organizations can insulate themselves from these threats by putting more resources into defensive measures and by treating cybersecurity as a board issue. Key steps include implementing responsive measures, providing employees with exercises on how to react, implementing recovery plans, planning for supply chain instability, and looking for alternative vendors who can provide critical services in the event of a disruption.



Quote for the day:

“If we wait until we’re ready, we’ll be waiting for the rest of our lives.” -- Lemony Snicket
