Daily Tech Digest - June 04, 2023

Insider risk management: Where your program resides shapes its focus

Choi says that while the information security team is ultimately responsible for the proactive protection of an organization’s information and IP, most of the actual investigation into an incident is generally handled by the legal and HR teams, which require fact-based evidence supplied by the information security team. “The CIO/CISO team need to be able to supply facts and evidence in a consumable, easy-to-understand fashion and in the right format so their legal and HR counterparts can swiftly and accurately conduct their investigation.” ... Water flows downhill and so does messaging on topics that many consider ticklish, such as IRM programs. Payne noted that “few, if any CEOs wish to discuss their threat risk management programs as it projects negativity — i.e., ‘we don’t trust you’ and they prefer to have positive messaging.” Few CISOs enjoy having an IRM program under their remit as “who wants to monitor their colleagues?” Payne adds, “Whacking external threats is easy; when it’s your colleague it becomes more problematic.”


What is the medallion lakehouse architecture?

The medallion architecture describes a series of data layers that denote the quality of data stored in the lakehouse. Databricks recommends taking a multi-layered approach to building a single source of truth for enterprise data products. This architecture guarantees atomicity, consistency, isolation, and durability as data passes through multiple layers of validations and transformations before being stored in a layout optimized for efficient analytics. The terms bronze (raw), silver (validated), and gold (enriched) describe the quality of the data in each of these layers. It is important to note that this medallion architecture does not replace other dimensional modeling techniques. Schemas and tables within each layer can take on a variety of forms and degrees of normalization depending on the frequency and nature of data updates and the downstream use cases for the data. Organizations can leverage the Databricks Lakehouse to create and maintain validated datasets accessible throughout the company. 
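To make the layering concrete, here is a minimal, illustrative PySpark sketch of a bronze -> silver -> gold flow on Delta tables. The paths, table names, and validation rules are hypothetical placeholders, and a Spark session with Delta Lake support (as on Databricks) is assumed.

```python
# Illustrative medallion flow; all names, paths, and rules are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw data as-is, preserving source fidelity.
raw = spark.read.json("/landing/orders/")
raw.write.format("delta").mode("append").saveAsTable("bronze_orders")

# Silver: validate and conform (deduplicate, enforce types, drop bad rows).
silver = (
    spark.read.table("bronze_orders")
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver_orders")

# Gold: aggregate into an enriched, analytics-ready business table.
gold = (
    spark.read.table("silver_orders")
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("daily_revenue"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold_daily_revenue")
```

Because each layer is persisted separately, the silver and gold tables can be rebuilt or audited independently, which is what lets them serve as the validated source of truth downstream.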


AppSec ‘Worst Practices’ with Tanya Janca

Having reasonable service-level agreements is so important. When I work with enterprise clients, they already have tons of software that’s in production doing its thing, but they’re also building and updating new stuff. So I have two service-level agreements and one is the crap that was here when I got here and the other stuff is all the beautiful stuff we’re making now. So I’ll set up my tools so that you can have a low vulnerability, but if it’s medium or above, it’s not going to production if it’s new. But all the stuff that was there when I scanned for the first time, we’re going to do a slower service-level agreement. That way we can chip away at our technical debt. The first time I came up with parallel SLAs was when this team lead asked, “Am I going to get fired because we have a lot of technical debt, and it would literally take us a whole year just to do the updates from the little software compositiony thing you were doing.” “No one’s getting fired!” I said. So that’s how we came up with the parallel SLAs so we could pay legacy technical debt down slowly like a student loan versus handling new development like credit card debt that gets paid every single month. There’s no running a ticket on the credit card!
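As a rough sketch of how such parallel SLAs might be encoded as a pipeline gate (the severity thresholds, SLA windows, and new-versus-legacy flag below are illustrative assumptions, not any particular scanner's API):

```python
# Illustrative "parallel SLA" gate: new code is blocked on medium-or-above
# findings, while pre-existing (legacy) findings get a slower remediation
# window instead of an immediate block. All thresholds are made up.
from datetime import date, timedelta

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
LEGACY_SLA_DAYS = {"low": 365, "medium": 180, "high": 90, "critical": 30}

def gate(finding, is_new_code, today=None):
    """Return (block_build, remediation_due_date) for one scanner finding."""
    today = today or date.today()
    sev = finding["severity"]
    if is_new_code:
        # New code: anything medium or above fails the pipeline.
        return SEVERITY_RANK[sev] >= SEVERITY_RANK["medium"], today
    # Legacy code: never block outright, but track a due date so the
    # technical debt is paid down over time.
    return False, today + timedelta(days=LEGACY_SLA_DAYS[sev])

print(gate({"id": "CVE-2023-0001", "severity": "high"}, is_new_code=True))
print(gate({"id": "CVE-2019-1234", "severity": "high"}, is_new_code=False))
```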


Revolutionizing the Nine Pillars of DevOps With AI-Engineered Tools

Leadership Practices: Leadership is vital to drive cultural changes, set vision and goals, encourage collaboration and ensure resources are allocated properly. Strong leadership fosters a successful DevOps environment by empowering teams and supporting innovation. AI can assist leaders in decision-making by analyzing large datasets to identify trends and predict outcomes, providing valuable insights to guide strategic planning. Collaborative Culture Practices: DevOps thrives in a culture of openness, transparency and shared responsibility. It’s about breaking down the silos that can exist between different teams and promoting effective communication and collaboration. AI-powered tools can improve collaboration through smart recommendations, fostering more effective communication and knowledge sharing. Design-for-DevOps Practices: This involves designing software in a way that supports the DevOps model. This can include aspects like microservices architecture, modular design and considering operability and deployability from the earliest stages of design.


The ethics of innovation in generative AI and the future of humanity

Humans answer questions based on our genetic makeup (nature), education, self-learning and observation (nurture). A machine like ChatGPT, on the other hand, has the world’s data at its fingertips. Just as human biases influence our responses, AI’s output is biased by the data used to train it. Because data is often comprehensive and contains many perspectives, the answer that generative AI delivers depends on how you ask the question. AI has access to trillions of terabytes of data, allowing users to “focus” their attention through prompt engineering or programming to make the output more precise. This is not a negative if the technology is used to suggest actions, but the reality is that generative AI can be used to make decisions that affect humans’ lives. ... We have entered a crucial phase in the regulatory process for generative AI, where applications like these must be considered in practice. There is no easy answer as we continue to research AI behavior and develop guidelines.


7 CIO Nightmares And How Enterprise Architects Can Help

The deeper you dig into cyber security, the more you find. Do you know what data your business actually needs to secure? A mission-critical application might be dependent on a spreadsheet in an outdated system. That data may be protected under regulation, but supplied from a cloud-based application that's reliant on open-source coding, and so on. Every CIO needs to know the top-ten, mission-critical, crown jewel applications and data centers that their business cannot live without, and what their connections and dependencies are. Each needs to have a clear plan of action in case of a security breach. The Solution: Mapping your tech stack with an enterprise architecture management (EAM) tool allows you to see exactly how mission critical each application is. This equates one-to-one with how much you need to invest in cyber security for each area. You can also gain clarity on which application is dependent on which platform. Likewise, you can find where crucial data is stored and where it feeds to.


7 Stages of Application Testing: How to Automate for Continuous Security

Pen testing allows organizations to simulate an attack on their web application, identifying areas of weakness that could be exploited by a malicious attacker. When done correctly, pen testing is an effective way to detect and remediate security vulnerabilities before they can be exploited. ... Traditional pen testing delivery often takes weeks to set up and the results are point in time. With the rise of DevOps and cloud technology, traditional once-a-year pen testing is no longer sufficient to ensure continuous security. To protect against emerging threats and vulnerabilities, organizations need to execute ongoing assessments: continuous application pen testing. Pen Testing as a Service (PTaaS) offers a more efficient process for proactive and continuous security compared to traditional pen testing approaches. Organizations are able to access a view into their vulnerability findings in real time, via a portal that displays all relevant data for parsing vulnerabilities and verifying the effectiveness of a remediation as soon as vulnerabilities are discovered.


Technological Innovation Poses Potential Risk of Rising Agricultural Product Costs

While technology has undeniably improved farming practices, its implementation requires significant financial investment. The upfront costs associated with purchasing advanced machinery, upgrading infrastructure, and adopting new technologies can burden farmers, particularly smaller-scale operations. These costs can ultimately be passed on to consumers, potentially leading to an increase in the prices of agricultural products. The seductive promises of cutting-edge machinery, precision agriculture, and genetically modified crops have mesmerised farmers worldwide. It is true, these technological marvels have unleashed unprecedented efficiency, capable of revolutionising the way we grow and harvest our sustenance. Yet, in their wake, they leave a trail of exorbitant expenses, shaking the very foundation of the agricultural landscape. ... Modern farming equipment is often equipped with advanced technology and features that improve efficiency, precision, and productivity.


Open Source Jira Alternative, Plane, Lands

Indeed, “Plane is a simple, extensible, open source project and product management tool powered by AI. It allows users to start with a basic task-tracking tool and gradually adopt various project management frameworks like Agile, Waterfall, and many more,” wrote Vihar Kurama, co-founder and COO of Plane, in a blog post. Yet, “Plane is still in its early days, not everything will be perfect yet, and hiccups may happen. Please let us know of any suggestions, ideas, or bugs that you encounter on our Discord or GitHub issues, and we will use your feedback to improve on our upcoming releases,” the description said. Plane is built using a carefully selected tech stack, comprising Next.js for the frontend and Django for the backend, Kurama said. “We utilize PostgreSQL as our primary database and Redis to manage background tasks,” he wrote in the post. “Additionally, our architecture includes two microservices, Gateway and Pilot. Gateway serves as a proxy server to our database, preventing the overloading of our primary server, while Pilot provides the interface for building integrations. ...”


Emerging AI Governance is an Opportunity for Business Leaders to Accelerate Innovation and Profitability

Firstly, regulation can help establish clear guidelines and standards for developing and deploying AI systems, for example, standards in accuracy, reliability, and risk management. Such guidelines can provide a stable and predictable framework for innovation, reducing uncertainty and risk in AI system development. This will increase participation in the field from developers and encourage greater investment from public and private organizations, thereby boosting the industry as a whole. ... Governments and governance organizations have a strong history of successfully investing in AI technologies and their inputs (e.g., Open Data Institute, Horizon Europe), as well as acting as demand side stimulators for long-term, high-risk innovations that are the foundations of many of the technologies we use today. Such examples include innovation at DARPA that formed the foundations of the Internet, or financial support to novel technologies through subsidy systems e.g., consumer solar panels.



Quote for the day:

"Try not to become a man of success but a man of value." -- Albert Einstein

Daily Tech Digest - June 03, 2023

Is it Possible to Calculate Technology Debt?

Perhaps we should rename it Architectural Debt or even Organisational Debt? From an Enterprise Architecture standpoint, we talk about “People, Processes, and Technology,” all of which contribute to the debt over time and form a more holistic view of the real debt. It does not matter what it is called as long as there is consistency within the organisation and it has been defined, agreed and communicated. ... The absence of master data management, quality, data lineage, and data validation all contribute to data debt. People debt is caused by having to support out-of-date assets (software and/or infrastructure), the resulting deskilling over time and missed opportunity to reskill which all potentially leads to employee attrition. Processes requiring modification can become dependent on technology due to the high cost of change, or the alternative of adjusting the design to accommodate poorly designed processes. While Robotic Process Automation (RPA) can provide a rapid solution in such cases, it raises the question of whether the automation simply perpetuates flawed processes without addressing the underlying issue. 


There Are Four Types of Data Observability. Which One is Right for You?

Business KPI Drifts: Since data observability tools monitor the data itself, they are often used to track business KPIs just as much as they track data quality drifts. For example, they can monitor the range of transaction amounts and notify where spikes or unusual values are detected. This autopilot system will show outliers in bad data and help increase trust in good data. Data Quality Rule Building: Data observability tools have automated pattern detection, advanced profiling, and time series capabilities and, therefore, can be used to discover and investigate quality issues in historical data to help build and shape the rules that should govern the data going forward. Observability for a Hybrid Data Ecosystem: Today, data stacks consist of data lakes, warehouses, streaming sources, structured, semi-structured, and unstructured data, API calls, and much more. ... Unlike metadata monitoring that is limited to sources with sufficient metadata and system logs – a property that streaming data or APIs don’t offer – data observability cuts through to the data itself and does not rely on these utilities.
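A minimal sketch of the KPI-drift idea, assuming a simple trailing-window z-score over daily transaction totals (the window size, threshold, and data are invented):

```python
# Flag daily transaction totals that deviate sharply from the trailing window.
import pandas as pd

daily = pd.DataFrame({
    "date": pd.date_range("2023-06-01", periods=10, freq="D"),
    "total_amount": [100, 98, 103, 101, 99, 102, 97, 250, 100, 101],
})

window = 5
# Use the *previous* window's statistics so an outlier does not mask itself.
trailing_mean = daily["total_amount"].rolling(window).mean().shift(1)
trailing_std = daily["total_amount"].rolling(window).std().shift(1)
daily["zscore"] = (daily["total_amount"] - trailing_mean) / trailing_std

alerts = daily[daily["zscore"].abs() > 3]  # arbitrary alert threshold
print(alerts[["date", "total_amount", "zscore"]])
```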


Why Companies Should Consider Developing A Chief Security Officer Position

The combination of the top-down and cross-functional influence of the CSO with the technical reach of the CISO should be key to creating and maintaining the momentum required to deliver change and break business resistance where it happens. In my experience, firms looking to implement this type of CSO position should start looking internally for the right executive: Ultimately the role is all about trust, and your candidate should have intimate knowledge of how to navigate the internal workings of the organization. I would recommend looking for someone who is an ambitious leader—not someone at an end-of-career position. Additionally, consider assigning this role to a seasoned executive. Someone you believe is motivated overall by the protection of the business from active threats, able to take an elevated long-term view where required, over and above the short-term fluctuations of any business. Demonstrating leadership in a field this complex should be seen as an opportunity to showcase skills that can be applied elsewhere in the organization.


Threatening botnets can be created with little code experience, Akamai finds

According to the research, the Dark Frost actor is selling the tool as a DDoS-for-hire exploit and as a spamming tool. “This is not the first exploit by this actor,” said West, who noted that the attacker favors Discord to openly tout their wares and brag. “He was taking orders there, and even posting screenshots of their bank account, which may or may not be legitimate.” ... The Dark Frost botnet uses code from the infamous Mirai botnet, which West said was easy to obtain and highly effective in exploiting hundreds of machines, and is therefore emblematic of how, with source code from previously successful malware strains and AI code generation, someone with minimal knowledge can launch botnets and malware. “The author of Mirai put out the source code for everyone to see, and I think that it started and encouraged the trend of other malware authors doing the same, or of security researchers publishing source code to get a bit of credibility,” said West.


Experts say stopping AI is not possible — or desirable

"These systems are not imputed with the capability to do all the things that they're now able to do. We didn’t program GPT-4 to write computer programs but it can do that, particularly when it’s combined with other capabilities like code interpreter and other programs and plugins. That’s exciting and a little daunting. We’re trying to get our hands wrapped around risk profiles of these systems. The risk profiles, which are evolving literally on a daily basis. “That doesn't mean it's all net risk. There are net benefits as well, including in the safety space. I think [AI safety research company] Anthropic is a really interesting example of that, where they are doing some really interesting safety testing work where they are asking a model to be less biased and at a certain size they found it will literally produce output that is less biased simply by asking it. So, I think we need to look at how we can leverage some of those emerging capabilities to manage the risk of these systems themselves as well as the risk of what’s net new from these emerging capabilities.”


How IT can balance local needs and global efficiency in a multipolar world

Technical architecture solutions, such as microservices, can help companies balance the level of local solution tailoring with the need to harness scale efficiencies. While not new, these solutions are more widely accepted and can be more easily realized in modern cloud platforms. These developments are enabling leading companies to evolve their operating models by building standardized, modular, and configurable solutions that maximize business flexibility and efficiency while making data management more transparent ... However useful these localization capabilities are, they will not work as needed unless local teams have sufficient autonomy (at some companies, local teams in China, for example, clear decisions through central headquarters, which is a major roadblock for pace and innovation). The best companies provide local teams with specific decision rights within guidelines and support them by providing necessary capabilities, such as IT talent embedded with local market teams to get customer feedback early.


Constructing the innovation mandate

We need to understand that successful innovation actually touches all aspects of a business, by contributing to improving business processes, identifying new, often imaginative, ways to reduce costs, building out existing business models into new directions and value, and discovering new ways of positioning in markets. To get to a consistent performance of innovation and creativity within organizations you do need to rely on a process, structure and the consistent ability to foster a culture of innovation. An innovation mandate is a critical tool for defining the scope and direction of innovation and the underlying values, commitment and resources placed behind it. Normally this innovation mandate comes in the form of a document, generally built by a small team of senior leaders, innovation experts and subject matter experts. That group should possess a deep understanding of the existing organization’s strategy, business models, operations and culture and a wider appreciation of the innovation landscape, the “fields of opportunity” and the emerging practices of innovation management.


3 Unexpected Technology Duos That Will Supercharge Your Marketing

While geofencing isn't the newest technology to enter the marketing spectrum, it is improving exponentially day by day. Geofencing creates virtual geographic boundaries around targeted areas, and when someone crosses into one of those areas, it creates a triggered response — your ads will show up while they're browsing their favorite sites or checking their email. ... Website content can be a major trust builder for your businesses and therefore can play a vital part in turning an interested prospect into a buying customer. But many a business owner has cringed at the thought of writing copy for their website ... let alone regularly updating it with blog posts or e-newsletter articles. Creating large amounts of content can be a constant challenge for business owners, and I get it. You're already busy running a business! But what I want small business owners to realize is that they have access to many tools — some of them free — that will do 95% of the writing for you.
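The trigger itself is conceptually simple; a toy sketch of the boundary check (the coordinates, radius, and ad-queuing step are made up for illustration):

```python
# Fire a trigger when a reported location falls inside a circular geofence.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

FENCE_CENTER = (40.7580, -73.9855)  # hypothetical store location
FENCE_RADIUS_M = 200

def inside_fence(lat, lon):
    return haversine_m(lat, lon, *FENCE_CENTER) <= FENCE_RADIUS_M

if inside_fence(40.7586, -73.9850):
    print("Trigger: queue a geofenced ad for this visitor")
```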


The Evolution of the Chief Privacy Officer

Given the natural overlap between privacy, security and the uses of data, strategic cooperation is key. “It’s about building a strategy together to develop an enterprise approach,” Jones said. “My role is to build privacy and transparency into every state system and application and business process at every stage of the life cycle.” Cotterill looks to Indiana’s IT org chart to help define the spheres of responsibility. The governor appoints the chief information officer and chief data officer, and the CISO and CPO report to each of them, respectively. “The CIO, and the CISO reporting to him, they’re focused on providing cost-effective, secure, consistent, reliable enterprise IT services and products,” he said. “For the CDO, with the CPO reporting to him … we have a threefold mission: to empower innovation, enable the use of open data, and do that all while maintaining data privacy.” IT provides “that secure foundation to do business,” while he and the CDO “are focused on the substantive use of data to drive decisions and improve outcomes,” he said.


Should Data Engineers be Domain Competent?

A traditional data engineer views a table with one million records as relational rows that must be crunched, transported and loaded to a different destination. In contrast, an application programmer approaches the same table as a set of member information or pending claims that impact life. The former is a pureplay, technical view, while the latter is more human-centric. These drastically differing lenses form the genesis of the data siloes ... When we advocate domain knowledge, let’s not relegate it to a few business analysts who are tasked to translate a set of high-level requirements into user stories. Rather, domain knowledge implies that every data engineer gets a grip on the intrinsic understanding of how functionality flows and what it tries to accomplish. Of course, this is easier to preach than practice, as expecting a data team to understand thousands of tables and millions of rows is akin to expecting them to navigate a freeway at peak time in reverse gear with blindfolds on. It would be disastrous. While it's amply evident that data teams need domain knowledge, it’s hard to expect that centralized data teams will deliver efficient results.



Quote for the day:

"Leaders are visionaries with a poorly developed sense of fear and no concept of the odds against them. " -- Robert Jarvik

Daily Tech Digest - June 02, 2023

A Data Scientist’s Essential Guide to Exploratory Data Analysis

Analyzing the individual characteristics of each feature is crucial as it will help us decide on their relevance for the analysis and the type of data preparation they may require to achieve optimal results. For instance, we may find values that are extremely out of range and may refer to inconsistencies or outliers. We may need to standardize numerical data or perform a one-hot encoding of categorical features, depending on the number of existing categories. Or we may have to perform additional data preparation to handle numeric features that are shifted or skewed, if the machine learning algorithm we intend to use expects a particular distribution. ... For Multivariate Analysis, best practices focus mainly on two strategies: analyzing the interactions between features, and analyzing their correlations. ... Interactions let us visually explore how each pair of features behaves, i.e., how the values of one feature relate to the values of the other. 
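As a minimal illustration of those steps with pandas on a synthetic dataset (the features and their distributions are invented):

```python
# Univariate summaries, one-hot encoding of a categorical feature, and a
# correlation matrix on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 80, size=200),
    "income": rng.lognormal(mean=10, sigma=0.5, size=200),
    "segment": rng.choice(["retail", "smb", "enterprise"], size=200),
})

# Univariate analysis: ranges, central tendency, spread, and skew.
print(df.describe(include="all"))
print("income skew:", df["income"].skew())

# Categorical features: one-hot encode before most ML algorithms.
encoded = pd.get_dummies(df, columns=["segment"], dtype=int)

# Multivariate analysis: pairwise correlations between numeric features.
print(encoded.corr(numeric_only=True))
```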


Resilient data backup and recovery is critical to enterprise success

So, what must IT leaders consider? The first step is to establish data protection policies that include encryption and least privilege access permissions. Businesses should then ensure they have three copies of their data – the production copy already exists and is effectively the first copy. The second copy should be stored on a different media type, not necessarily in a different physical location (the logic behind it is to not store your production and backup data in the same storage device). The third copy could or should be an offsite copy that is also offline, air-gapped, or immutable (Amazon S3 with Object Lock is one example). Organizations also need to make sure they have a centralized view of data protection across all environments for greater management, monitoring and governance, and they need orchestration tools to help automate data recovery. Finally, organizations should conduct frequent backup and recovery testing to make sure that everything works as it should.
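For that third, immutable copy, a hedged boto3 sketch of S3 Object Lock might look like the following; the bucket and object names are placeholders, AWS credentials and region configuration are assumed, and error handling is omitted.

```python
# Write a backup copy to an Object Lock bucket so it cannot be deleted or
# overwritten until the retention window expires. Names are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled at bucket-creation time.
s3.create_bucket(Bucket="example-backup-vault", ObjectLockEnabledForBucket=True)

retain_until = datetime.now(timezone.utc) + timedelta(days=30)
with open("nightly-backup.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="example-backup-vault",
        Key="2023-06-02/nightly-backup.tar.gz",
        Body=f,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```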


Data Warehouse Architecture Types

Different architectural approaches offer unique advantages and cater to varying business requirements. In this comprehensive guide, we will explore different data warehouse architecture types, shedding light on their characteristics, benefits, and considerations. Whether you are building a new data warehouse or evaluating your existing architecture, understanding these options will empower you to make informed decisions that align with your organization’s goals. ... Selecting the right data warehouse architecture is a critical decision that directly impacts an organization’s ability to leverage its data assets effectively. Each architecture type has its own strengths and considerations, and there is no one-size-fits-all solution. By understanding the characteristics, benefits, and challenges of different data warehouse architecture types, businesses can align their architecture with their unique requirements and strategic goals. Whether it’s a traditional data warehouse, hub-and-spoke model, federated approach, data lake architecture, or a hybrid solution, the key is to choose an architecture that empowers data-driven insights, scalability, agility, and flexibility.


What is federated Identity? How it works and its importance to enterprise security

FIM has many benefits, including reducing the number of passwords a user needs to remember, improving their user experience and improving security infrastructure. On the downside, federated identity does introduce complexity into application architecture. This complexity can also introduce new attack surfaces, but on balance, properly implemented federated identity is a net improvement to application security. In general, we can see federated identity as improving convenience and security at the cost of complexity. ... Federated single sign-on allows for sharing credentials across enterprise boundaries. As such, it usually relies on a large, well-established entity with widespread security credibility, organizations such as Google, Microsoft, and Amazon, for example. In this case, applications are usually gaining not just a simplified login experience for their users, but the impression and actual reliance on high-level security infrastructure. Put another way, even a small application can add “Sign in with Google” to its login flow relatively easily, giving users a simple login option, which keeps sensitive information in the hands of the big organization.
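On the relying-party side, "Sign in with Google" largely boils down to verifying the ID token Google issues. A minimal sketch using the google-auth library (the client ID is a placeholder and the surrounding web-framework plumbing is omitted):

```python
# Verify a Google-issued OIDC ID token; the app never handles the password.
from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

GOOGLE_CLIENT_ID = "1234567890-example.apps.googleusercontent.com"  # placeholder

def verify_google_login(token: str) -> dict:
    # Checks the token's signature, expiry, issuer, and audience; raises
    # ValueError if anything does not check out.
    claims = id_token.verify_oauth2_token(
        token, google_requests.Request(), GOOGLE_CLIENT_ID
    )
    # The verified claims identify the user without the app storing credentials.
    return {"subject": claims["sub"], "email": claims.get("email")}
```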


Millions of PC Motherboards Were Sold With a Firmware Backdoor

Given the millions of potentially affected devices, Eclypsium’s discovery is “troubling,” says Rich Smith, who is the chief security officer of supply-chain-focused cybersecurity startup Crash Override. Smith has published research on firmware vulnerabilities and reviewed Eclypsium’s findings. He compares the situation to the Sony rootkit scandal of the mid-2000s. Sony had hidden digital-rights-management code on CDs that invisibly installed itself on users’ computers and in doing so created a vulnerability that hackers used to hide their malware. “You can use techniques that have traditionally been used by malicious actors, but that wasn’t acceptable, it crossed the line,” Smith says. “I can’t speak to why Gigabyte chose this method to deliver their software. But for me, this feels like it crosses a similar line in the firmware space.” Smith acknowledges that Gigabyte probably had no malicious or deceptive intent in its hidden firmware tool. But by leaving security vulnerabilities in the invisible code that lies beneath the operating system of so many computers, it nonetheless erodes a fundamental layer of trust users have in their machines. 


Minimising the Impact of Machine Learning on our Climate

There are several things we can do to mitigate the negative impact of software on our climate. They will be different depending on your specific scenario. But what they all have in common is that they should strive to be energy-efficient, hardware-efficient and carbon-aware. GSF is gathering patterns for different types of software systems; these have all been reviewed by experts and agreed on by all member organisations before being published. In this section we will cover some of the patterns for machine learning as well as some good practices which are not (yet?) patterns. If we divide the actions according to the ML life cycle, or at least a simplified version of it, we get four categories: Project Planning, Data Collection, Design and Training of ML model and finally, Deployment and Maintenance. The project planning phase is the time to start asking the difficult questions, think about what the carbon impact of your project will be and how you plan to measure it. This is also the time to think about your SLA; overcommitting to strict latency or performance metrics that you actually don’t need can quickly become a source of emissions you can avoid.


5 ways AI can transform compliance

Compliance is all about controls. Data must be classified according to multiple rules, and the movement of and access to that data recorded. It’s the perfect task for AI. Ville Somppi, vice president of industry solutions at M-Files, says: “Thanks to AI, organisations can automatically classify information and apply pre-defined compliance rules. In the case of choosing the right document category from a compliance perspective, the AI can be trained quickly with a small sample set categorised by people. This is convenient, especially when people can still correct wrong suggestions in the beginning of the learning process. ... Data pools are too big for humans to comb through. AI is the only way. In some sectors, adoption of AI has been delayed owing to regulatory issues. However, full deployment ought now to be possible. Gabriel Hopkins, chief product officer at Ripjar, says: “Banks and financial services companies face complex responsibilities when it comes to compliance activities, especially with regard to combatting the financing of terrorism and preventing the laundering of criminal proceeds.
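As a generic stand-in for training a classifier quickly from a small, human-labelled sample (plain scikit-learn, not any vendor's actual implementation; the documents and categories are invented):

```python
# Tiny document classifier trained on a handful of human-labelled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "Master services agreement between the parties ...",
    "Quarterly VAT filing and supporting invoices ...",
    "Employee onboarding checklist and signed NDA ...",
    "Statement of work, deliverables and payment terms ...",
]
labels = ["contract", "tax", "hr", "contract"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(docs, labels)

# Early corrections from reviewers are simply appended to the training set
# before the next fit, which is how the model improves during rollout.
print(clf.predict(["Signed consulting contract with payment schedule"]))
```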


Former Uber CSO Sullivan on Engaging the Security Community

CISO is a lonely role. There's a really amazing camaraderie between security executives that I'm not sure exists in any other kind of leadership role. The CISO role is pretty new compared to the other leadership roles. It's far from settled what kind of background is ideal for the role. It's far from settled where the person in the role should report. It’s far from settled what kind of a budget you're going to get. It's far from settled in terms of what type of decision-making power you're going to have. So, as a result, I think security leaders often feel lonely and on an island. They have an executive team above them that expects them to know all the answers about security, and then they have a team underneath them that expects them to know all the answers about security. So, they can't betray ignorance to anybody without undermining their role. And so, the security leader community often turns to each other for support, for guidance. There are a good number of Slack channels and conferences that are just CISOs talking through the role and asking for best practices and advice on how to deal with hard situations.


Google Drive Deficiency Allows Attackers to Exfiltrate Workspace Data Without a Trace

Mitiga reached out to Google about the issue, but the researchers said they have not yet received a response, adding that Google's security team typically doesn't recognize forensics deficiencies as a security problem. This highlights a concern when working with software-as-a-service (SaaS) and cloud providers, in that organizations that use their services "are solely dependent on them regarding what forensic data you can have," Aspir notes. "When it comes to SaaS and cloud providers, we’re talking about a shared responsibility regarding security because you can't add additional safeguards within what is given." ... Fortunately, there are steps that organizations using Google Workspace can take to ensure that the issue outlined by Mitiga isn't exploited, the researchers said. This includes keeping an eye out for certain actions in their Admin Log Events feature, such as events about license assignments and revocations, they said.


How defense contractors can move from cybersecurity to cyber resilience

We’re thinking way too small about a coordinated cyberattack’s capacity for creating major disruption to our daily lives. One recent, vivid illustration of that fact happened in 2022, when the Russia-linked cybercrime group Conti launched a series of prolonged attacks on the core infrastructure of the country of Costa Rica, plunging the country into chaos for months. Over a period of two weeks, Conti tried to breach different government organizations nearly every day, targeting a total of 27 agencies. Soon after that, the group launched a separate attack on the country’s health care system, causing tens of thousands of appointments to be canceled and patients to experience delays in getting treatment. The country declared a national emergency and eventually, with the help of allies around the world including the United States and Microsoft, regained control of its systems. The US federal government’s strict compliance standards often impede businesses from excelling beyond the most basic requirements. 



Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley

Daily Tech Digest - June 01, 2023

Throw out all those black boxes and say hello to the software-defined car

Software-defined vehicles might give automakers more flexibility in terms of the features and functions they can create, but it comes with some headaches on their end, including ensuring that a car works in each market where it's offered. "All the requirements are different for each region, and the complexity is so high. And from my perspective, this is the biggest challenge for engineers. Complexity is so high, especially if you sell cars worldwide. It is not easy. So in the past, we had this world car, so you bring one car for each market. We are not able to bring this world car for all regions anymore," Hoffmann told me. "In the past, it was not easy, but it was very clear—more performance, more efficiency, focus on design. And now that's changed dramatically. So software became very important; you have to focus on the ecosystem, and it is very, very complex. For each region you have, you have dedicated and different ecosystems," he said. ... The move to software-defined vehicles complicates this, as it applies to software as well as hardware. That means each update needs to be signed off by a regulator before being sent out over the air.


Staying ahead: How the best CEOs continually improve performance

Between three and five years into their tenure, the best CEOs typically combine what they’ve gained from their expanded learning agenda and their self-directed outsider’s perspective to form a point of view on what the next performance S-curve is for their company. The concept of the S-curve is that, with any strategy, there’s a period of slow initial progress as the strategy is formed and initiatives are launched. That is followed by a rapid ascent from the cumulative effect of initiatives coming to fruition and then by a plateau where the value of the portfolio of strategic initiatives has largely been captured. Dominic Barton, McKinsey’s former global managing partner, describes why managing a series of S-curves is important: “No one likes change, so you need to create a rhythm of change. Think of it as applying ‘heart paddles’ to the organization.” Former Best Buy CEO Hubert Joly describes why and how he applied heart paddles to his organization, moving from one S-curve to another: “We started with a turnaround, something we called ‘Renew Blue.’ 


Cloud Security: Don’t Confuse Vendor and Tool Consolidation

Unfortunately, simply buying solutions from fewer vendors doesn’t necessarily deliver the operational efficiencies or efficacy of security coverage — that entirely depends on the nature of those solutions, how integrated they are and how good the user experience is that they provide. If you’re an in-the-trenches application developer or security practitioner, consolidating cybersecurity-tool vendors might not mean much to you. If the vendor that your business chooses doesn’t offer an integrated platform, you’re still left juggling multiple tools. You are constantly toggling between screens and dealing with the productivity hit that comes with endless context switching. You have to move data manually from one tool to another to aggregate, normalize, reconcile, analyze and archive it. You have to sit down and think about which alerts to prioritize because each tool is generating different alerts, and without tooling integrations, one tool is incapable of telling you how an issue it has surfaced might (or might not) be related to an alert from a different tool.


Deconstructing DevSecOps: Why A DevOps-Centric Approach To Security Is Needed In 2023

DevSecOps, in reality, is actually more of a bridge-building exercise: DevOps are asked to be that bridge to the security teams. Yet, simultaneously, DevOps are asked to enhance the technology used (for example, software composition analysis, or SCA for short) often without the full input of security teams, and so new potential for risk is introduced. These are DevOps security tasks, in effect, rather than DevSecOps. These need to be approached from the top down and bottom up: an organisational risk assessment to prioritise the software security tasks, and then a bottom-up modelling of how to incorporate something like SCA in our example. This is a DevOps-centric approach to security rather than the commonly accepted DevSecOps one. ... Security risks cover the entire software lifecycle from the initial open source building blocks right through to deployed and in production. Understanding this level of maturity is essential to a DevOps-centric approach, with a shift-right being equally important to the shift-left focus of old. You can think of this as modernising DevSecOps, reducing alert 'noise' within developer range, and ensuring contextual threat levels are brought into focus.


Why 'Data Center vs. Cloud' Is the Wrong Way to Think

If you think in more nuanced ways about how data centers relate to the cloud, you'll realize that terms like "data center vs. cloud" just don't make sense. There are several reasons why. First and foremost, data centers are an integral part of public clouds. If you move your workload to AWS, Azure, or another public cloud, it's hosted in a data center. The difference between the cloud and private data centers is that in the cloud, someone else owns and manages the data center. ... A second reason why it's tricky to compare data centers to the cloud is that not all workloads that exist outside of the public cloud are hosted in private data centers dedicated to handling just one business's applications and data. ... Another cause for blurred lines between data centers and the cloud is that in certain cases, you can obtain services inside private data centers that resemble those most closely associated with the public cloud. I'm thinking here of offerings like Equinix Metal, which is essentially an infrastructure-as-a-service (IaaS) solution that allows companies to stand up servers on-demand inside colocation centers. 


Tales of Kafka at Cloudflare: Lessons Learnt on the Way to 1 Trillion Messages

With an event-driven system, to avoid coupling, systems shouldn't be aware of each other. Initially, we had no enforced message format and producer teams were left to decide how to structure their messages. This can lead to unstructured communication and pose a challenge if the teams don't have a strong contract in place, with an increased number of unprocessable messages. To avoid unstructured communication, the team searched for solutions within the Kafka ecosystem and found two viable options, Apache Avro and protobuf, with the latter being the final choice. We had previously been using JSON, but found it difficult to enforce compatibility and the JSON messages were larger compared to protobuf. ... Based on Kafka connectors, the framework enables engineers to create services that read from one system and push it to another one, like Kafka or Quicksilver, Cloudflare's Edge database. To simplify the process, we use Cookiecutter to template the service creation, and engineers only need to enter a few parameters into the CLI.


Agile & UX: a failed marriage?

Where should UX teams sit in an Agile organisation? I have worked with companies where they’ve resided in engineering/technology, product, customer experience, digital, and even their own vertical. The choice of where the function sits should be based on organisational maturity, for example, newer companies tend to have them bundled with engineering (and therefore the designers tend to be UI designers who are helping the front end developers code) and more mature ones might have them sit in either product or standalone orgs. The challenge is what follows. Most companies that are Agile tend to have cross-functional mission teams working on a product or feature. In the case study, we saw that there were two distinct teams: first, the business and architecture group and second, the PO and their Agile delivery squad. Hidden behind this seemingly simple structure is much more complexity. For example, while UX teams work with the PO and their squad, they have a role to play, arguably a fundamental one, in helping the business and solution architects understand the sort of experience that will emerge (and therefore should be considered when estimating timeframes/investments).


Hybrid working: the new workplace normal

Some enterprises are allowing teams within the organization to decide whether to continue to work from home or come back to the office for a few days a week. But the transition is creating a new set of challenges: Since many organizations reduced their office real estate footprint during the pandemic, scheduling problems now crop up when multiple teams are doing “in-office” days simultaneously and vying for space and resources such as meeting rooms and videoconferencing equipment. The rise of this “hoteling” concept can create new headaches for operations and IT teams. One constant among the attendees is the technology gap increasingly associated with a hybrid or remote workforce. Employees returning to the workplace are discovering that it is no longer a plug-and-play environment. Downsizing, moving, and years of work-at-home technology often lead to frustrating searches for the right cable to connect, the right power adapter, and proper training for the new audioconferencing bridge that they never learned how to use.


How generative AI regulation is shaping up around the world

Laws relating to regulation of AI in Canada are currently subject to a mixture of data privacy, human rights and intellectual property legislation on a state-to-state basis. However, an Artificial Intelligence and Data Act (AIDA) is planned for 2025 at the earliest, with drafting having begun under the Bill C-27, the Digital Charter Implementation Act, 2022. An in-progress framework for managing the risks and pitfalls of generative AI, as well as other areas of this technology across Canada, aims to encourage responsible adoption, with consultations reportedly planned with stakeholders. ... The Indian government announced in March 2021 that it would apply a “light touch” to AI regulation in the aim of maintaining innovation across the country, with no immediate plans for specific regulation currently. Opting against regulation of AI growth, this area of tech was identified by the Ministry of Electronics and IT as “significant and strategic”, but the agency stated that it would put in place policies and infrastructure measures to help combat bias, discrimination and ethical concerns.


Good Cop, Bad Cop: Investigating AI for Policing

On a brighter note, police departments with real-time crime centers, as well as regional intelligence centers, can benefit from AI technology due to the massive amounts of data pouring in from multiple sources. AI can effectively sort through and prioritize such data in real time to allow faster and more targeted responses to unfolding situations. Perhaps most critically, law enforcement agencies can turn to AI for assistance during unfolding incidents. “A 911 dispatching system, emergency management watch center, or real-time crime center embedded with assistive AI can analyze data from multiple sources, such as cameras, sensors and databases, to gain insights that might otherwise go unseen during a fast-moving situation or investigation,” Sims says. Hara notes that AI is already playing an important role in several key law enforcement areas. He points to crowd management as an example. “AI will understand how many people are expected at a location and alert officials to a variance,” Hara says. AI can also play a critical role in school safety, taking advantage of the surveillance cameras many schools have already installed.


Why the Document Model Is More Cost-Efficient Than RDBMS

A common objection from customers before they try a NoSQL database like MongoDB Atlas is that their developers already know how to use RDBMS, so it is easy for them to “stay the course.” Believe me when I say that nothing is easier than storing your data the way your application actually uses it. A proper document data model mirrors the objects that the application uses. It stores data using the same data structures already defined in the application code, using containers that mimic the way the data is actually processed. There is no abstraction between the application and the physical storage, and no increased time complexity to the query. The result is less CPU time spent processing the queries that matter. One might say this sounds a bit like hard-coding data structures into storage like the HMS systems of yesteryear. So what about those OLAP queries that RDBMS was designed to support? MongoDB has always invested in APIs that allow users to run the ad hoc queries required by common enterprise workloads.
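A small pymongo sketch of storing the data the way the application uses it (a locally running MongoDB is assumed; the collection and fields are illustrative):

```python
# One order document with its line items embedded, instead of rows spread
# across several normalized tables.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

orders.insert_one({
    "_id": 1001,
    "customer": {"name": "Ada", "tier": "gold"},
    "items": [
        {"sku": "A-100", "qty": 2, "price": 9.99},
        {"sku": "B-200", "qty": 1, "price": 24.50},
    ],
    "status": "pending",
})

# The read path mirrors the application's object: one query, no joins.
doc = orders.find_one({"customer.name": "Ada", "status": "pending"})
print(doc["items"])
```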



Quote for the day:

“You never know how strong you are until being strong is the only choice you have.” -- Bob Marley

Daily Tech Digest - May 31, 2023

5 best practices for software development partnerships

“The key to successful co-creation is ensuring your partner is not just doing their job, but acting as a true strategic asset and advisor in support of your company’s bottom line,” says Mark Bishopp, head of embedded payments/finance and partnerships at Fortis. “This begins with asking probing questions during the prospective stage to ensure they truly understand, through years of experience on both sides of the table, the unique nuances of the industries you’re working in.” Beyond asking questions about skills and capabilities, evaluate the partner’s mindset, risk tolerance, approach to quality, and other areas that require alignment with your organization’s business practices and culture. ... To eradicate the us-versus-them mentality, consider shifting to more open, feedback-driven, and transparent practices wherever feasible and compliant. Share information on performance issues and outages, have everyone participate in retrospectives, review customer complaints openly, and disclose the most challenging data quality issues.


Revolutionizing Algorithmic Trading: The Power of Reinforcement Learning

The fundamental components of a reinforcement learning system are the agent, the environment, states, actions, and rewards. The agent is the decision-maker, the environment is what the agent interacts with, states are the situations the agent finds itself in, actions are what the agent can do, and rewards are the feedback the agent gets after taking an action. One key concept in reinforcement learning is the idea of exploration vs exploitation. The agent needs to balance between exploring the environment to find out new information and exploiting the knowledge it already has to maximize the rewards. This is known as the exploration-exploitation tradeoff. Another important aspect of reinforcement learning is the concept of a policy. A policy is a strategy that the agent follows while deciding on an action from a particular state. The goal of reinforcement learning is to find the optimal policy, which maximizes the expected cumulative reward over time. Reinforcement learning has been successfully applied in various fields, from game playing (like the famous AlphaGo) to robotics (for teaching robots new tasks).
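Those pieces fit together even in the smallest example. Here is a toy tabular Q-learning sketch on a made-up one-dimensional "walk to the goal" environment, with arbitrary hyperparameters, showing states, actions, rewards, epsilon-greedy exploration, and the greedy policy extracted at the end:

```python
# Toy Q-learning: the agent learns to walk right from state 0 to the goal.
import random

N_STATES = 6          # states 0..5, where state 5 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):  # episodes
    state, done = 0, False
    while not done:
        # Exploration vs. exploitation: occasionally try a random action;
        # otherwise act greedily (ties broken at random).
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Update the estimate toward reward plus discounted future value.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned (greedy) policy should step right (+1) from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```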


Data Governance Roles and Responsibilities

Executive-level roles include leadership in the C-suite at the organization’s top. According to Seiner, people at the executive level support, sponsor, and understand Data Governance and determine its overall success and traction. Typically, these managers meet periodically as part of a steering committee to cover broadly what is happening in the organization, so they would add Data Governance as a line item, suggested Seiner. These senior managers take responsibility for understanding and supporting Data Governance. They keep up to date on Data Governance progress through direct reports and communications from those at the strategic level. ... According to Seiner, strategic members take responsibility for learning about Data Governance, reporting to the executive level about the program, being aware of Data Governance activities and initiatives, and attending meetings or sending alternates. Moreover, this group has the power to make timely decisions about Data Governance policies and how to enact them. 


Effective Test Automation Approaches for Modern CI/CD Pipelines

Design is not just about unit tests though. One of the biggest barriers to test automation executing directly in the pipeline is that the team that deals with the larger integrated system only starts a lot of their testing and automation effort once the code has been deployed into a bigger environment. This wastes critical time in the development process, as certain issues will only be discovered later and there should be enough detail to allow testers to at least start writing the majority of their automated tests while the developers are coding on their side. This doesn’t mean that manual verification, exploratory testing, and actually using the software shouldn’t take place. Those are critical parts of any testing process and are important steps to ensuring software behaves as desired. These approaches are also effective at finding faults with the proposed design. However, automating the integration tests allows the process to be streamlined.


What Does Being a Cross-Functional Team in Scrum Mean?

By bringing together individuals with different skills and perspectives, these teams promote innovation, problem-solving, and a holistic approach to project delivery. They reduce handoffs, bottlenecks, and communication barriers often plaguing traditional development models. Moreover, cross-functional teams enable faster feedback cycles and facilitate continuous improvement. With all the necessary skills in one team, there's no need to wait for handoffs or external dependencies. This enables quicker decision-making, faster iterations, and the ability to respond to customer feedback promptly. In short, being a cross-functional Scrum Team means having a group of individuals with diverse skills, a shared sense of responsibility, and a collaborative mindset. They work together autonomously, leveraging their varied expertise to deliver high-quality software increments. ... Building genuinely cross-functional Scrum Teams starts with product definition. This means identifying and understanding the scope, requirements, and goals of the product the team will work on. 


The strategic importance of digital trust for modern businesses

Modern software development processes, like DevOps, are highly automated. An engineer clicks a button that triggers a sequence of complicated, but automated, steps. If a part of this sequence (e.g., code signing) is manual then there is a likelihood that the step may be missed because everything else is automated. Mistakes like using the wrong certificate or the wrong command line options can happen. However, the biggest danger is often that the developer will store private code signing keys in a convenient location (like their laptop or build server) instead of a secure location. Key theft, misused keys, server breaches, and other insecure processes can permit code with malware to be signed and distributed as trusted software. Companies need a secure, enterprise-level code signing solution that integrates with the CI/CD pipeline and automated DevOps workflows but also provides key protection and code signing policy enforcement.
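The signing step itself is simple; the hard part is where the key lives and who may invoke it. A hedged sketch with the Python cryptography package (a throwaway key is generated purely for illustration; in practice the key would sit in an HSM or KMS behind an audited signing service, and the file paths are placeholders):

```python
# Hash-and-sign a release artifact; real pipelines would fetch the key from
# an HSM/KMS-backed signing service, never generate or store it locally.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

with open("release-artifact.tar.gz", "rb") as f:
    artifact = f.read()

signature = private_key.sign(
    artifact,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

with open("release-artifact.tar.gz.sig", "wb") as f:
    f.write(signature)

# Consumers verify against the published public key; any post-signing change
# to the artifact makes verification fail.
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo
)
```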


Managing IT right starts with rightsizing IT for value

IT financial management — sometimes called FinOps — is overlooked in many organizations. A surprising number of organizations do not have a very good handle on the IT resources being used. Another way of saying this is: Executives do not know what IT they are spending money on. CIOs need to make IT spend totally transparent. Executives need to know what the labor costs are, what the application costs are, and what the hardware and software costs are that support those applications. The organization needs to know everything that runs — every day, every month, every year. IT resources need to be matched to business units. IT and the business unit need to have frank discussions about how important that IT resource really is to them — is it Tier One? Tier Two? Tier Thirty? In the data management space — same story. Organizations have too much data. Stop paying to store data you don’t need and don’t use. Atle Skjekkeland, CEO at Norway-based Infotechtion, and John Chickering, former C-level executive at Fidelity, both insist that organizations, “Define their priority data, figure out what it is, protect it, and get rid of the rest.”


Implementing Risk-Based Vulnerability Discovery and Remediation

A risk-based vulnerability management program is a complex preventative approach used for swiftly detecting and ranking vulnerabilities based on their potential threat to a business. By implementing a risk-based vulnerability management approach, organizations can improve their security posture and reduce the likelihood of data breaches and other security events. ... Organizations should still have a methodology for testing and validating that patches and upgrades have been appropriately implemented and would not cause unanticipated flaws or compatibility concerns that might harm their operations. Also, remember that there is no "silver bullet": automated vulnerability management can help identify and prioritize vulnerabilities, making it easier to direct resources where they are most needed. ... Streamlining your patching management is another crucial part of your security posture: an automated patch management system is a powerful tool that may assist businesses in swiftly and effectively applying essential security fixes to their systems and software.
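A toy illustration of the prioritization step, assuming a made-up scoring formula that weights CVSS by asset criticality and observed exploitation:

```python
# Rank findings by a composite risk score rather than raw CVSS alone.
findings = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "exploited_in_wild": False, "asset_criticality": 2},
    {"cve": "CVE-2021-4444", "cvss": 7.5, "exploited_in_wild": True,  "asset_criticality": 5},
    {"cve": "CVE-2022-2222", "cvss": 5.3, "exploited_in_wild": False, "asset_criticality": 1},
]

def risk_score(f):
    # Severity times business impact, boosted when exploitation is observed.
    exploit_factor = 2.0 if f["exploited_in_wild"] else 1.0
    return f["cvss"] * f["asset_criticality"] * exploit_factor

for f in sorted(findings, key=risk_score, reverse=True):
    print(f["cve"], round(risk_score(f), 1))
# A mid-severity flaw on a crown-jewel asset with an active exploit can
# outrank a "critical" CVSS score on a low-value system.
```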


Upskilling the non-technical: finding cyber certification and training for internal hires

“If you are moving people into technical security from other parts of the organization, look at the delta between the employee's transferrable skills and the job they’d be moving into. For example, if you need a product security person, you could upskill a product engineer or product manager because they know how the product works but may be missing the security mindset,” she says. “It’s important to identify those who are ready for a new challenge, identify their transferrable skills, and create career paths to retain and advance your best people instead of hiring from outside.” ... While upskilling and certifying existing employees would help the organization retain talented people who already know the company, Deidre Diamond, founding CEO of cyber talent search company CyberSN, cautions against moving skilled workers to entry-level roles in security that don’t pay what the employees are used to earning. Upskilling financial analysts into compliance, either as a cyber risk analyst or GRC analyst, will require higher-level certifications, but the pay for those upskilled positions may be more equitable for those higher-paid employees, she adds.


Data Engineering in Microsoft Fabric: An Overview

Fabric makes it quick and easy to connect to Azure Data Services, as well as other cloud-based platforms and on-premises data sources, for streamlined data ingestion. You can quickly build insights for your organization using more than 200 native connectors. These connectors are integrated into the Fabric pipeline and support user-friendly drag-and-drop data transformation with dataflows. Fabric standardizes on the Delta Lake format, which means all the Fabric engines can access and manipulate the same dataset stored in OneLake without duplicating data. This storage system provides the flexibility to build lakehouses using a medallion architecture or a data mesh, depending on your organizational requirements. For data transformation, you can choose a low-code or no-code experience using pipelines and dataflows, or a code-first experience using notebooks and Spark. Power BI can consume data from the Lakehouse for reporting and visualization. Each Lakehouse has a built-in TDS/SQL endpoint for easy connectivity and querying of data in the Lakehouse tables from other reporting tools.
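
For the code-first path, a notebook cell might look like the following sketch. It relies only on standard Spark and Delta Lake APIs, and the file path and table name are placeholders rather than anything prescribed by Fabric.

    from pyspark.sql import SparkSession

    # A Fabric notebook already provides a SparkSession; getOrCreate reuses it
    # (or builds one when the script runs elsewhere).
    spark = SparkSession.builder.getOrCreate()

    # Ingest a raw file from the Lakehouse; path and column names are placeholders.
    raw = spark.read.option("header", True).csv("Files/raw/orders.csv")

    # Light validation before promoting the data.
    clean = raw.dropna(subset=["order_id"]).dropDuplicates(["order_id"])

    # Save as a Delta table so Power BI and the SQL endpoint can query the
    # same data in OneLake without copying it.
    clean.write.format("delta").mode("overwrite").saveAsTable("orders_silver")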



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - May 18, 2023

Security breaches push digital trust to the fore

Digital trust needs to be integrated within the organization and isn’t necessarily owned by a single department or job title. Even so, cybersecurity, and the CISO, have an important role to play, according to the World Economic Forum’s 2022 Earning Digital Trust report, in protecting the interconnectivity that supports businesses, people’s livelihoods, and society at large as reliance on digital interactions grows. As governments and regulators implement stricter requirements for ensuring data privacy and security, CISOs face a renewed need to prioritize digital trust or risk fines, lawsuits, significant brand damage, and revenue loss to the organization. Thomas suggests that, for CISOs, digital trust could become the measurable metric and outcome of security initiatives. “Organizations are not only secure to be compliant and protect information. The outcome of this is the trust that customers have, and that is what's going to change the way we measure how well security is being implemented,” he says. “If you want to ensure your customers trust you, you need to look at it as an organizational goal, or have it as a part of the strategy. ...”


Preparing the Mindset for Change: Five Roadblocks That Lead Digital Transformation to Failure

The absence of effective advocacy may have significantly contributed to the failure of many digital transformation efforts. However, it is the responsibility of the stakeholders to be the advocates of the change. The goal of change cannot be just a business decision; it needs to be believed in. In a generational business, the founders are often wedded to legacy processes; they find it difficult to break the norm and adopt automation, even as disparate systems restrict growth and scale. ... A lack of strategic planning before and after implementation can lead to severe consequences for an organization. Conflicting priorities can arise, and critical objectives may not be effectively communicated or achieved due to a disconnect between business and technology plans.
Unfortunately, many organizations fail to recognize the importance of pre- and post-implementation planning and instead focus solely on the implementation process. This shortsighted approach can lead to poor customer and stakeholder engagement, as well as employee dissatisfaction.


Don't overlook attack surface management

Let’s look at three aspects of ASM that you should consider today: ... Visibility and discovery. Attack surface management should provide a comprehensive view of the cloud environment, allowing organizations to identify potential security weaknesses and blind spots. It helps uncover unknown assets, unauthorized services, and overlooked configurations, offering a clearer picture of potential entry points for attackers. ... Risk assessment and prioritization. By understanding the scope and impact of vulnerabilities, organizations can assess the associated risks and prioritize them. Attack surface management empowers businesses to allocate resources efficiently, focusing on high-risk areas that could have severe consequences if compromised. ... Remediation and incident response. When vulnerabilities are detected, ASM provides the necessary insights to remediate them promptly. It facilitates incident response by helping organizations take immediate action, such as applying patches, updating configurations, or isolating compromised resources.


One on One with Automated Software Testing Expert Phil Japikse

A common misconception is that creating automated testing increases the delivery time. There was a study done at Microsoft some years ago that looked at different teams. Some were using a test-first strategy, some were using a test-eventual strategy, and some groups were using traditional QA departments for their testing. Although the cycle time was slightly higher for those doing automated testing, the throughput was much higher. This was because the quality of their work was much higher, and they had much less rework. We all know it’s more interesting to work on new features and tedious and boring to fix bugs. If you aren’t including at least some automated testing in your development process, you are going to spend more time fixing bugs and less time building new features. ... The more complex or important the system is, the more testing it needs. Software that controls airplanes, for example, must be extremely well tested. One could argue that game software doesn’t need as much testing. It all depends on the business requirements for the application.
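
As a toy illustration of the test-first idea, the tests below are written before the code they exercise and fail until the feature exists. The function and its discount rule are invented purely for the example.

    import pytest

    # Tests written first (pytest style): they fail until the feature is built.
    def test_bulk_discount_applied_at_threshold():
        assert price_with_discount(quantity=10, unit_price=5.0) == pytest.approx(45.0)

    def test_no_discount_below_threshold():
        assert price_with_discount(quantity=2, unit_price=5.0) == pytest.approx(10.0)

    # Minimal implementation added afterwards to make the tests pass.
    def price_with_discount(quantity, unit_price, threshold=10, rate=0.10):
        total = quantity * unit_price
        return total * (1 - rate) if quantity >= threshold else total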


The Work Habits That Are Blocking Your Ideas, Dreams and Breakthrough Success

A reactive mind prevents us from responding productively to the moment. Any time we are reactive, because we are not effectively relating to ourselves in the moment, we cannot be present with others. Those who have been tasked with carrying out our objectives can sense our lack of clarity and misalignment. They may perceive us as "confused," for instance, and then our reactivity triggers their self-protective belief structures. Miscommunication becomes the norm when a reactive individual is leading a team. ... Your colleague's negativity is not only self-destructive; it is also destructive to the organization and the morale of their co-workers. But your own disconnection from the truth of the moment is also destructive. By prejudging a colleague, you are missing out on the opportunity to positively interact with them or influence their behavior, and both of these things matter. A healthy yet skeptical outlook is helpful. Would you want a contract written by your lawyer that only foresaw favorable outcomes? The invitation is to transform negativity into a healthy dynamic so that co-creativity and joy are both possible. You need to be open to the possibilities that each of us possesses.


Dialectic Thinking: The Secret to Exceptional Mindful Leadership

The paradox of acceptance and change may very well be the toughest one we grapple with. Whether in our own meditation practice and self-development or in leading an organization, it’s vital to take a dialectic approach. For genuine change to occur, there must first be acceptance of the current state. This acceptance forms the bedrock of reality, a foundation that is crucial for creating meaningful change. It's a truth that can't be obscured or sugarcoated. With acceptance, there's an opportunity to see things as they are and then to envisage something different. However, we can often misconstrue acceptance as passivity or complacency. It can be seen as an excuse to “do nothing”, to shy away from bold action, or to remain comfortably entrenched in the status quo. On the flip side, a relentless push for change can create a sense of perpetual dissatisfaction, hindering our ability to appreciate what already is. This can also foster a short-term, transactional mindset, particularly in relationships.


How to explain data meshes, fabrics, and clouds

“A data mesh is a decentralized approach to managing data, where multiple teams within a company are responsible for their own data, promoting collaboration and flexibility,” he said. There are no complex words in this definition, and it introduces the problems data meshes aim to solve, the type of solution, and why it’s important. Expect to be asked for more technical details, though, especially if the executive has prior knowledge of other data management technologies. For example, “Weren't data warehouses and data lakes supposed to solve the data management issue?” This question can be a trap if you answer it with the technical differences between data warehouses, lakes, and meshes. Instead, focus your response on the business objective. Satish Jayanthi, co-founder and CTO of Coalesce, offers this suggestion: “Data quality often affects the accuracy of business analytics and decision-making. By implementing data mesh paradigms, the quality and accuracy of data can be enhanced, resulting in increased trust among businesses to utilize data more extensively for informed decision-making.”


Has the Cloud Forever Changed Disaster Recovery?

For today’s organisations, resilience is paramount to a successful data protection plan, said Lawrence Yeo, Enterprise Solutions Director, ASEAN, Hitachi Vantara. Being resilient entails having the flexibility to quickly restore data and applications to both existing and new cloud accounts. We believe that traditional backup and disaster recovery systems focused on data centres are becoming outdated. Instead, we need a data protection strategy that prioritises IT resilience and can protect data anywhere, including public clouds and SaaS applications. Resilience is the key to a robust data protection strategy, as slow disaster recovery or data restoration can negatively impact business processes. To be resilient, you need a data protection solution that encompasses backup and disaster recovery across on-premises and public clouds, allowing you to restore data and applications quickly, either to existing or new cloud accounts.


IOT Sensors - Sensing the danger

How can an operator establish integrity and accuracy within a sensor and mitigate potential vulnerabilities? This is where Root of Trust (RoT) hardware plays a crucial role. Hardware such as a Device Identifier Composition Engine (DICE) can supply a unique security key to each firmware layer found in a sensor or connected device. ... Should an attack on your systems be successful, and a layer become exposed, the unique key accessed by a hacker cannot be used to breach further elements. This can help reduce the risk of a significant data breach and enables operators to trust the devices they utilise in a network. A device can also easily be re-keyed should any unauthorised amendments be discovered within the sensor’s firmware, enabling users to quickly identify vulnerabilities throughout the system’s update process. For organisations with smaller devices and an even smaller budget, specifications such as the Measurement and Attestation Roots (MARS) can be deployed to instil the necessary capabilities of identity, measurement storage, and reporting in a more cost-effective manner.
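
Conceptually, this layering works by having each boot stage derive the next stage's secret from its own secret plus a measurement (hash) of the code it is about to launch. The Python sketch below mimics that chain with HMAC; it is an illustration of the principle under those assumptions, not the actual DICE specification.

    import hashlib
    import hmac

    def next_layer_secret(current_secret: bytes, firmware_image: bytes) -> bytes:
        # Derive the next layer's secret from the current secret and a
        # measurement (hash) of the firmware that will run next.
        measurement = hashlib.sha256(firmware_image).digest()
        return hmac.new(current_secret, measurement, hashlib.sha256).digest()

    # Hypothetical chain: a unique per-device secret seeds the first layer, and
    # each later secret depends on every earlier measurement. A leaked layer key
    # reveals nothing about the device secret or earlier layers, and changing
    # any firmware image produces entirely different keys downstream.
    device_secret = b"unique-per-device-secret"
    layer1_key = next_layer_secret(device_secret, b"bootloader image bytes")
    layer2_key = next_layer_secret(layer1_key, b"application firmware bytes")
    print(layer2_key.hex())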


Data hoarding is bad for business and the environment

The findings suggest young consumers are unaware of the impact of their own carbon footprint. According to the report, 44% said it’s wrong for businesses to waste energy and cause pollution by storing unneeded information online. ... The fallout? The Veritas study found that 47% of consumers would stop buying from a company if they knew it was willfully causing environmental damage by failing to control how much unnecessary data it was storing. Meanwhile, 49% of consumers think it’s the responsibility of the organizations that store their information to delete it when it’s no longer needed, the report said. ... It is incumbent upon leaders to pay attention to this issue. Srinivasan cautioned that organizations should not underestimate the environmental impact of poor data management practices – even if they are outsourcing their storage to public cloud providers. A good data management practice would be to make consumers aware of the costs of all this data, especially the negative externalities on our overheating planet.



Quote for the day:

"Management is about arranging and telling. Leadership is about nurturing and enhancing." -- Tom Peters