Daily Tech Digest - June 08, 2023

5 Reasons Why IT Security Tools Don't Work For OT

While IT and OT both seek to ensure confidentiality (the protection of sensitive data and assets), integrity (the fidelity of data over its lifecycle), and availability (the accessibility and responsiveness of resources and infrastructure), they prioritize different pieces of this CIA triad. IT's highest priority is confidentiality. IT deals in data, and the stakeholders of IT concern themselves with protecting that data — from trade secrets to the personal information of users and customers. OT's highest priority is availability. OT processes operate heavy-duty equipment in the physical realm, and for them, availability means safety. Downtime is simply untenable when it means shutting off a blast furnace or an industrial boiler tank. For the sake of availability and responsiveness, most OT components weren't built to accommodate security implementations at all. ... Almost all IT-based tools require downtime for installation, updates, and patching. These activities are generally a non-starter for industrial environments, no matter how significant a vulnerability may be. Again, downtime for OT systems means putting safety at risk.


Oshkosh CIO Anu Khare on IT’s pursuit of value

VSP stands for value, strategic fit, and passionate sponsor. The framework ties to my fundamental philosophy of letting cost, value, and the customer decide what is valuable and what is not valuable for our customers. We didn’t start with VSP, but it evolved as a guiding framework, as we looked at our portfolio enablement process and asked ourselves, what’s the simplest way to approach project portfolio management? First, we decided to focus on the value. We started working with the business sponsors to articulate where and what impact the technology will have on the business. We then validate with finance, and if it has hard savings, it gets No. 1 priority in terms of investment. The relentless focus on value also leads to the second point, which is strategic fit. The project may be valuable, but in any organization, the list of things the organization can do is always bigger than what the organization can afford or should afford. This is a capital allocation discussion. So we focus on the strategic fit.


Cisco spotlights generative AI in security, collaboration

Security and IT administrators will be able to describe granular security policies and the assistant will evaluate how to best implement them across different aspects of their security infrastructure, Patel said. At the Live! event, Cisco demoed how a generative Cisco Policy Assistant can reason with the existing set of firewall policy rules to implement and simplify them within the Cisco Secure Firewall Management Center. Cisco says it is the first of many examples of how generative AI can reimagine policy management across the Cisco Security Cloud. ... In addition, he said the security assistant will let customers describe and contextualize events across email, the web, endpoints, and the network to tell security operations center (SOC) analysts exactly what happened, the impact, and best next steps to take to remediate problems and set new policies. The SOC Assistant will provide a comprehensive situation analysis for analysts, correlating intel across the Cisco Security Cloud, relaying potential impacts, and providing recommended actions with the goal of reducing the time needed for SOC teams to respond to potential threats, he said.


How WASM (and Rust) Unlocks the Mysteries of Quantum Computing

Rather than picking from fixed specs, quantum programming can require you to define the setup of your quantum hardware, describing the quantum circuit that will be formed by the qubits as well as the algorithm that will run on it — and error-correcting the qubits while the job is running — with a language like OpenQASM; that’s rather like controlling an FPGA with a hardware description language like Verilog. You can’t measure a qubit to check for errors directly while it’s working or you’d end the computation too soon, but you can measure an extra qubit and extrapolate the state of the working qubit from that. What you get is a pattern of measurements called a syndrome. In medicine, a syndrome is a pattern of symptoms used to diagnose a complicated medical condition like fibromyalgia. In quantum computing, you have to “diagnose” or decode qubit errors from the pattern of measurements, using an algorithm that can also decide what needs to be done to reverse the errors and stop the quantum information in the qubits from decohering before the quantum computer finishes running the program.
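As a toy illustration of syndrome decoding (not the OpenQASM or WASM tooling the article describes, and assuming the simplest possible code), the Python sketch below implements the classic three-qubit bit-flip code: two parity measurements form the syndrome, and a lookup table tells the decoder which qubit, if any, to flip back.

```python
# Toy syndrome decoder for the 3-qubit bit-flip code (illustration only).
# Data qubits are modeled classically as bits; real decoders work from the
# measurement outcomes of ancilla qubits without reading the data directly.

# Syndrome -> correction: which data qubit (index) to flip, or None.
SYNDROME_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # parity(q0, q1) violated only -> q0 flipped
    (1, 1): 1,     # both parities violated       -> q1 flipped
    (0, 1): 2,     # parity(q1, q2) violated only -> q2 flipped
}

def measure_syndrome(bits):
    """Parity checks q0^q1 and q1^q2: the 'pattern of measurements'."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode_and_correct(bits):
    syndrome = measure_syndrome(bits)
    qubit = SYNDROME_TABLE[syndrome]
    if qubit is not None:
        bits[qubit] ^= 1  # reverse the diagnosed error
    return bits

# A single bit-flip error on qubit 1 is diagnosed and reversed.
print(decode_and_correct([0, 1, 0]))  # -> [0, 0, 0]
```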


Energy security needs a secure IoT

The IoT has a central role to play as governments and industries work to reduce dependence on fossil fuels, establish new forms of energy generation and implement sufficient means of storing, managing and distributing energy. ... IoT connected devices and systems can contribute to carbon tracking and smart-meter energy monitoring; they can enable data exchange for microgrids and support mechanisms for selling energy directly back into the network. These solutions will transmit data so that energy companies can monitor devices and conditions, control devices in remote locations, track performance to predict maintenance cycles and act on alerts. They will be able to monitor energy consumption for smart metering through connected meters and sensors for load balancing on the grid. In this way, connectivity is part of the intelligent, efficient, renewable energy model; however, it must be cybersecure. As new and additional devices are deployed, they could present more pathways for potential cyberattacks. That is a significant risk and safeguards are therefore needed to protect against unauthorised access to devices, networks, management platforms and cloud infrastructure.


How to Get Unstuck From Stress and Find Solutions Inside Yourself

The balance of sympathetic and parasympathetic states is critical both for our well-being and for the cultivation of presence. Neither state is superior to the other. They are opposite and equal in their importance. Both are needed to dynamically maintain the homeostasis of the body. (Remember, a state of polarity is the ability to go from one state to the other in alternation, as needed.) As with any ecosystem, complementary forces are necessary to preserve harmony. The trouble is that our regular thinking and doing in the world of business are sympathetically activating. It is not possible to use only the mind to become relaxed and restore balance to the nervous system. We need to counterbalance our SNS (sympathetic nervous system) activation through feeling and being. This is a whole new mode that many high-powered leaders are less familiar with and may not entirely trust. The good news, however, is that when we are in a relaxed, parasympathetic state, we can access the capabilities of our higher intelligence that we need for presence and collaboration, such as visualization and spontaneous generative creativity.


Daily Standups May Not Improve Your Team’s Agility

To make sure every team member gets the support they need, I highly recommend having at least once per week a longer team meeting, something we call “team time”. This meeting should be 30–45 min long and ensure there is enough time to really get to the bottom of a problem and find a solution. Every team member can propose a topic and the team discusses it together. If there are no challenges to discuss, this is also a great forum for other forms of knowledge sharing. When you sum up these costs, you will end up in a similar or even more expensive range than daily standups, but those meetings are actually helpful since they allow the team to solve problems and share knowledge and, with that, replace other meetings and make work more efficient. The social aspect is something that is rarely stated as a need for daily standups. But, for me, this is a misconception. A healthy and social team will always be an efficient team. Developing a proper team atmosphere and spirit should be key and in the interest of everyone.


Everything Is Connected: Five IoT Trends Moving Forward

In what sounds like old news at this point, cybersecurity will continue to be at the forefront of business decision making. What is different this year is the rise of artificial intelligence (AI) and ML. AI and ML are making malicious actors more efficient and potentially more effective when carrying out attacks. Natural Language Models such as ChatGPT have opened new directions of attack and lowered the overall threshold for creating effective malicious code. Additionally, the changing legislative landscape around privacy will spur companies to take a hard look at the way that they collect, use, and retain sensitive personal data. This may require a complete redesign of products, procedures, or in fact, entire business models. ... Finally, it is no secret that the tech labor market is in a state of upheaval. Many companies are reducing or restricting their workforces as they seek efficiency or profits. This exodus of talented tech professionals has created severe knowledge gaps that must be addressed.


API Management Is a Commodity: What’s Next?

As API management software unbundles the gateway and adapts to the multi-gateway world, new and emerging software vendors are looking to fill the resulting requirement gaps for API design and development, security, analytics, portals, and marketplaces. Alex Walling, field CTO for Rapid, sees that developers need a layer of abstraction on top of their existing API gateways, such as those from WSO2, Kong, and Apigee, so that they can find APIs easily and check whether someone has already developed an API for what they need. Moreover, Derric Gilling, CEO of Moesif, said he believes that API Gateways will become just one of the specialized pieces of the API stack developers and organizations will need to assemble to meet the growing adoption of APIs. He sees business models for APIs evolving beyond simply charging for API invocation counts, and the need for a specialized analytics solution to keep pace. Along with the continued explosion of interest in APIs, especially as organizations use more third-party APIs, the development and testing process becomes more complex and time-consuming.


AI: Interpreting regulation and implementing good practice

Emerging standards, guidance and regulation for AI are being created worldwide, and it will be important to align this and create a common understanding for producers and consumers. Organizations such as ETSI, ENISA, ISO and NIST are creating helpful cross-referenced frameworks for us to follow, and regional regulators, such as the EU, are considering how to penalize bad practices. In addition to being consistent, however, the principles of regulation should be flexible, both to cater for the speed of technological development and to enable businesses to apply appropriate requirements to their capabilities and risk profile. An experimental mindset, as demonstrated by the Singapore Land Transport Authority’s testing of autonomous vehicles, can allow academia, industry and regulators to develop appropriate measures. These fields need to come together now to explore AI systems’ safe use and development. Cooperation, rather than competition, will enable safer use of this technology more quickly.



Quote for the day:

"Men who are in earnest are not afraid of consequences." -- Marcus Garvey

Daily Tech Digest - June 07, 2023

The Design Patterns for Distributed Systems Handbook

Some people mistake distributed systems for microservices. And it's true – microservices are a distributed system. But distributed systems do not always follow the microservice architecture. So with that in mind, let's come up with a proper definition for distributed systems: A distributed system is a computing environment in which various components are spread across multiple computers (or other computing devices) on a network. ... If you decide that you do need a distributed system, then there are some common challenges you will face: Heterogeneity – Distributed systems allow us to use a wide range of different technologies. The problem lies in how we keep consistent communication between all the different services. Thus it is important to have common standards agreed upon and adopted to streamline the process. Scalability – Scaling is no easy task. There are many factors to keep in mind such as size, geography, and administration. There are many edge cases, each with their own pros and cons. Openness – Distributed systems are considered open if they can be extended and redeveloped.


Shadow IT is increasing and so are the associated security risks

Gartner found that business technologists, those business unit employees who create and bring in new technologies, are 1.8 times more likely than other employees to behave insecurely across all behaviors. “Cloud has made it very easy for everyone to get the tools they want but the really bad thing is there is no security review, so it’s creating an extraordinary risk to most businesses, and many don’t even know it’s happening,” says Candy Alexander, CISO at NeuEon and president of Information Systems Security Association (ISSA) International. To minimize the risks of shadow IT, CISOs need to first understand the scope of the situation within their enterprise. “You have to be aware of how much it has spread in your company,” says Pierre-Martin Tardif, a cybersecurity professor at Université de Sherbrooke and a member of the Emerging Trends Working Group with the professional IT governance association ISACA. Technologies such as SaaS management tools, data loss prevention solutions, and scanning capabilities all help identify unsanctioned applications and devices within the enterprise.


Worker v bot: Humans are winning for now

Ethical and legislative concerns aside, what the average worker wants to know is if they’ll still have a job in a few years’ time. It’s not a new concern: in fact, jobs are lost to technological advancements all the time. A century ago, most of the world’s population was employed in farming, for example. Professional services company Accenture asserts that 40% of all working hours could be impacted by generative AI tools — primarily because language tasks already account for just under two thirds of the total time employees work. In The World Economic Forum’s (WEF) Future of Jobs Report 2023, jobs such as clerical or secretarial roles, including bank tellers and data entry clerks, are reported as likely to decline. Some legal roles, like paralegals and legal assistants, may also be affected, according to a recent Goldman Sachs report. ... Customer service roles are also increasingly being replaced by chatbots. While chatbots can be helpful in automating customer service scenarios, not everyone is convinced. Sales-as-a-Service company Feel offers, among other services, actual live sales reps to chat with online shoppers.


The Future of Continuous Testing in CI/CD

Continuous testing is rapidly evolving to meet the needs of modern software development practices, with new trends emerging to address the challenges development teams face. Three key trends currently gaining traction in continuous testing are cloud-based testing, shift-left testing and security testing. These trends are driven by the need to increase efficiency and speed in software development while ensuring the highest quality and security levels. Let’s take a closer look at these trends. Cloud-Based Testing: Continuous testing is deployed through cloud-based computing, which provides multiple benefits like ease of deployment, mobile accessibility and quick setup time. Businesses are now adopting cloud-based services due to their availability, flexibility and cost-effectiveness. Cloud-based testing doesn’t require coding skills or setup time, which makes it a popular choice for businesses. ... Shift-Left Testing: Shift-left testing is software testing that involves testing earlier in the development cycle rather than waiting until later stages, such as system or acceptance testing.


IT is driving new enterprise sustainability efforts

There’s an additional sustainability benefit to modernizing applications, says Patel at Capgemini. “Certain applications are written in a way that consumes more energy.” Digital assessments can help measure the carbon footprint of internally developed apps, she says. Modern application design is key to using the cloud efficiently. At Choice Hotels, many components now run as services that can be configured to automatically shut down during off hours. “Some run as micro processes when called. We’re using serverless technologies and spot instances in the AWS world, which are more efficient, and we’re building systems that can handle it when those disappear,” Kirkland says. “Every digital interaction has a carbon price, so figure out how to streamline that,” advises Patel. This includes business process reengineering, as well as addressing data storage and retention policies. For example, Capgemini engages employees in sustainable IT by holding regular “digital cleaning days” that include deleting or archiving email messages and cleaning up collaborative workspaces.


SRE vs. DevOps? Successful Platform Engineering Needs Both

The complexity of managing today’s cloud native applications drains DevOps teams. Building and operating modern applications requires significant amounts of infrastructure and an entire portfolio of diverse tools. When individual developers or teams choose to use different tools and processes to work on an application, this tooling inconsistency and incompatibility causes delays and errors. To overcome this, platform engineering teams provide a standardized set of tools and infrastructure that all project developers can use to build and deploy the app more easily. Additionally, scaling applications is difficult and time-consuming, especially when traffic and usage patterns change over time. Platform engineering teams address this with their golden paths — or environments designed to scale quickly and easily — and logical application configuration. Platform engineering also helps with reliability. Development teams that use a set of shared tools and infrastructure tested for interoperability and designed for reliability and availability make more reliable software.


Zero Trust Model: The Best Way to Build a Robust Data Backup Strategy

A zero trust model changes your primary security principle from the age-old axiom “trust but verify” to “never trust; always verify.” Zero trust is a security concept that assumes any user, device, or application seeking access to a network is not to be automatically trusted, even if it is within the network perimeter. Instead, zero trust requires verification of every request for access, using a variety of security technologies and techniques such as multifactor authentication (MFA), least-privilege access, and continuous monitoring. A zero trust environment provides many benefits, though it is not without its flaws. Trust brokers are the central component of zero trust architecture. They authenticate users’ credentials and provide access to all other applications and services, which means they have the potential to become a single point of failure. Additionally, some multifactor authentication processes might cause users to wait a few minutes before allowing them to log in, which can hinder employee productivity. The location of trust brokers can also create latency issues for users.
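As a rough sketch of "never trust; always verify," the Python snippet below (with hypothetical user, resource, and policy names) checks every single request against MFA status and a least-privilege policy rather than trusting anything by virtue of being inside the network perimeter.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool
    resource: str
    action: str

# Hypothetical least-privilege policy: user -> {resource: allowed actions}.
POLICY = {"backup-operator": {"backups": {"read", "restore"}}}

def authorize(req: AccessRequest) -> bool:
    """Every request is verified; nothing is trusted by network location."""
    if not req.mfa_verified:                      # multifactor authentication check
        return False
    allowed = POLICY.get(req.user, {}).get(req.resource, set())
    return req.action in allowed                  # least-privilege check

print(authorize(AccessRequest("backup-operator", True, "backups", "restore")))  # True
print(authorize(AccessRequest("backup-operator", True, "backups", "delete")))   # False
```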


How to Manage Data as a Product

The way most organizations go about managing data is out of step with the way people want to use data, says Wim Stoop, senior director of product marketing at Cloudera. “If you want to get your teeth fixed or your appendix out you go to an expert rather than a generalist,” he says. “The same should apply to the data that people in organizations need.” However, most enterprises treat data as a centralized and protected asset. It’s locked up in production applications, data warehouses, and data lakes that are administered by a small cadre of technical specialists. Access is tightly controlled, and few people are aware of data the organization possesses outside of their immediate purview. The drive towards organization agility has helped fuel interest in the data mesh. “Individual teams that are responsible for data can iterate faster in a well-defined construct,” Stoop says. “The shift to treating data as a product breaks down siloes and gives data longevity because it’s clearly defined, supported and maintained by the employees that know it intimately.”


Preparing for the Worst: Essential IT Crisis Preparation Steps

Crisis preparation begins with planning -- outlining the steps that must be taken in the event of a crisis, as well as procedures for data backup and recovery, network security, communication with stakeholders, and employee safety, says O’Brien, who founded the Yale Law School Privacy Lab. “Every organization should conduct regular drills and simulations to test the effectiveness of their plan,” he adds. Every enterprise should appoint an overall crisis management coordinator, an individual responsible for ensuring that there’s a coordinated, updated, and rehearsed crisis management plan, Glair advises. He also recommends creating a crisis management chain of authority that’s ready to jump into action as soon as a crisis event occurs. The crisis management coordinator may report directly to any of several enterprise departments, including risk management, legal, operations, or even the CIO or CFO. “The reporting location is not as important as the authority the coordinator is granted to prepare and manage the crisis management strategy,” he says.


How to make developers love security

Developers hate being slowed down or interrupted. Unfortunately, legacy security testing systems often have long feedback loops that negatively impact developer velocity. Whether it’s complex automated scans or asking the security team to complete manual reviews, these activities are a source of friction. They increase the delay between making a change and verifying its effect. Security suites with many different tools can result in context switching and multi-step mitigations. Additionally, tools aren’t always equipped to find problems in older code. Only scanning the new changes in your pipeline maximizes performance, but this can allow oversights to occur as more vulnerabilities become known. Similarly, developers have to refamiliarize themselves with old work whenever a vulnerability impacts it. This is a cognitive burden that further increases the fix’s overall time and effort. All too often, these problems add up to an inefficient security model that prevents timely patches and consumes developers’ productive hours.
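To make that trade-off concrete, here is a minimal sketch, assuming a git repository and a hypothetical scan_file helper, of a pipeline step that scans only the files changed since the main branch: fast feedback, but untouched legacy code is never re-checked as new vulnerabilities become known.

```python
import subprocess

def changed_files(base: str = "origin/main", head: str = "HEAD") -> list[str]:
    """Return the files touched between two revisions."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def scan_file(path: str) -> list[str]:
    """Hypothetical placeholder for a real SAST/SCA scanner invocation."""
    return []  # would return a list of findings for `path`

# Scanning only the diff keeps feedback fast, but older code is never re-checked.
findings = {path: scan_file(path) for path in changed_files()}
```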



Quote for the day:

"Incompetence annoys me. Overconfidence terrifies me." -- Malcolm Gladwell

Daily Tech Digest - June 06, 2023

CISOs, IT lack confidence in executives’ cyber-defense knowledge

CISOs need to understand precisely how and where the two risk environments — corporate and personal — intersect to get ahead of this problem. Here are four things to work on to ensure key executives are protected outside the office environment. Be vigilant for changes in leadership and executive team risk profiles. These blind spots can be a CEO who makes frequent media appearances, has stock market dealings that are open to public scrutiny, or is simply well enough known to be included in social media conversations. Identify the company’s “crown jewels” that need to be protected. This needs to include an evaluation of potential risks, including through personal attack, and developing mitigation strategies. Ensure high-level executives get cybersecurity training. All staff should attend tailored awareness training which includes phishing simulation exercises and tabletop exercises, C-level and board executives included. Shared responsibilities. CISOs should work with other high-level executives to ensure that shared responsibility is carried across the organization; this means understanding shared risk.


Cyber spotlight falls on boardroom ‘privilege’ as incidents soar

“With the growth and increasing sophistication of social engineering, organisations must enhance the protection of their senior leadership now to avoid expensive system intrusions,” added Novak. “When you look at the grand scheme of social engineering, the reason we see this increasing is because it’s a relatively easy thing for a threat actor to throw out there and try to hit a lot of organisations with,” Novak told reporters during a pre-briefing session attended by Computer Weekly. “This ties back to being financially motivated – most of these events are about fraudulent movement of money and, typically, that results in them getting paid very quickly.” ... “Globally, cyber threat actors continue their relentless efforts to acquire sensitive consumer and business data. The revenue generated from that information is staggering, and it’s not lost on business leaders, as it is front and centre at the board level,” said IDC research vice-president Craig Robinson. The research team added that the fact many organisations continue to rely on distributed workforces added to the challenges faced by defenders in creating and, crucially, enforcing human-centric security best practice.


Will companies use low code to run their businesses?

Today's low code platforms typically provide a visual, drag-and-drop interface for building form-based applications, or tools to build a visual workflow. The resulting apps can be used to automate business processes, create mobile apps, and integrate with other systems. The aim of low code technology is to make application development much more accessible and efficient, so that organizations can better respond to changing business needs and stay competitive. I've seen a lot of other benefits in my discussions with CIOs, for whom low code was certainly not a topic that rose to their pay grade until the last couple of years. Now it's clear that low code can reduce dependencies on hard-to-find development talent, lower the cost of development while speeding it up, and reduce backlogs. ... Low code is becoming a central part of the future of IT, and there are now increasing proof points to show that low code adoption can successfully happen in a substantial, even comprehensive way in both IT and the business.


5 Must-Know Facts about 5G Network Security and Its Cloud Benefits

With its low latency, higher bandwidth, and extensive security measures, 5G strengthens the security of cloud connectivity. This upgrade enables secure and reliable transmission of sensitive information as well as real-time data processing. 5G allows organizations to confidently use cloud services to store and manage their data, reducing the risk of data breaches. 5G offers superior fault tolerance when compared to cable connections, primarily due to the inherent resilience of wireless channels in mitigating communication failures. With a cable connecting an office or factory to a provider, it might be necessary to build a backup connection through an optical fiber or radio. But 5G has a reserved channel from the outset. If one base station fails, others will take over automatically, making downtime unlikely. In addition, 5G network slicing capabilities provide companies with dedicated virtual networks within their IT system. This enables better isolation and segregation of data, applications, and services, improving overall security.


Private 5G might just make you rethink your wireless options

“Cal Poly is a data-laden environment where, to unlock the true value of that data, the data must constantly move to where it is needed,” said Bill Britton, Cal Poly’s vice president for IT services and CIO. Unfortunately, the university’s legacy Wi-Fi networks were straining under the weight of that data. Before investigating 5G options, Cal Poly’s IT team audited their networks to see how, where, and why data overloaded existing networks. They tracked usage down to the component level and found things like a single Xbox downloading close to 2 terabytes of data, as a single student’s console served as a gaming hub for more than 1,500 other people worldwide, all gobbling up Cal Poly bandwidth. “What happens if an Xbox is consuming that much bandwidth during registration or final exams?” Britton asked. “There’s a myth that you can just add more bandwidth, but with Wi-Fi, the infrastructure itself will always be the major limiting factor,” he said. Without costly traffic management add-ons, legacy Wi-Fi has severe limitations, including issues with hand-offs, interference, and insufficient roaming capabilities.


How to Boost Cybersecurity Through Better Communication

Cybersecurity feels like war. And that naturally leads to cybersecurity staff forming a combative mindset. Tasked with securing a massive and growing cybersecurity attack surface, constantly evolving threat landscape, vulnerability-prone software, insider threats, new and unprecedented challenges (like the recent shift to remote work), limited budgets, a persistent skills shortage and general understaffing and other constraints — users just seem like another set of problems coming at you. ... The larger conversation between cybersecurity staff and employees feels like the security pros have one set of objectives (preventing and dealing with cyberattacks) that feel at odds with the objectives of everyone else in the organization (winning customers, earning profits, achieving growth goals, minimizing customer loss and many others). The big picture is that the larger goals of the organization are shared goals. All those business objectives depend on cybersecurity — security is part of what makes them possible. By focusing on shared objectives, users will partner more readily.


4 Big Regulatory Issues To Ponder in 2023

Ensuring regulatory compliance can feel like a delicate juggling act. Large enterprises with operations in multiple states and countries are faced with a patchwork of laws that are evolving in an attempt to keep up with today’s proliferation of data and technology. “It’s challenging to stay on top of what seems to be a never-ending list of new requirements, some of which overlap but do not align,” Hodge says. Enterprises may not even have the necessary knowledge to understand where they stand with regulatory compliance. “Many companies don’t even know everywhere sensitive data resides in their technical stack. Companies that had to comply with GDPR or CCPA may have done proper data mapping, but most haven’t. This generally tends to be the most time- and resource-intensive,” according to Robin Andruss, chief privacy officer at data privacy company Skyflow. Budgetary and staffing constraints complicate that juggling act. Enterprises need technology, people, and training to keep up with compliance. Getting an adequate share of the budget for those resources can be particularly challenging for smaller companies.


Generative AI and the future of HR

Generative technology can actually pull on the skills that are required to be successful in the job. That’s not to say managers don’t need to check the end product. They’ll need to be that human in the loop to make sure the job requirement is a good one. But gen AI can dramatically improve speed and quality. The other application in recruiting is candidate personalization. Right now, if you’re an organization with tens of thousands of applicants, you may or may not have super customized ways of reaching out to the people who have applied. With generative AI, you can include much more personalization about the candidate, the job, and what other jobs may be available if there’s a reason the applicant isn’t a fit. All those things are made immensely easier and faster through generative AI. ... The best application of gen AI is in large skill pools where you’re trying to fill a reasonably well-known job. We need a more productive and efficient way to navigate all the profiles coming through. Where it makes me a little anxious is anytime it’s a novel job—a new role—or even, in US law, a job that’s changed more than 25 percent or 33 percent. 


How to move the needle on innovation

“You can’t talk about innovation without considering culture, but I view that in a very practical fashion: it’s got to be more than philosophy and ideology,” says Marchand. “Creating the right culture has to start at the top with an appreciation for and a dedication to innovation.” In considering the innovation-savvy leaders with whom she has worked, Marchand finds that they all have a passion for problem-solving, an insatiable sense of curiosity, and a willingness to embrace change. “They like to be involved in transformations and don’t mind a little bit of ambiguity,” she says. “They also have an appreciation for the fact that even though they’re there to support the shareholders, they’re going to enable innovation—new products, services, and ideas—to flourish.” Weaving innovation into the business. Enabling innovation includes devoting resources to innovation in an integrated manner. “One major pharma company created a little startup unit staffed by its ten best project managers and gave them [US]$20 million and 18 months to see what they could come up with,” recalls Marchand. 


If You Want to Deliver Fast, Your Tests Have the Last Word

We need to have something that doesn’t change, that feels safe and that frees our mind from the burden of thinking whether or not it actually fits. We enter autopilot mode. The problem with that is that we want software development to behave like an assembly line: once the assembly line is built, we never touch it. We operate in the same way all the time. That may work with our CI/CD lanes for a while, but sadly it doesn’t always work well with our code. It even gets worse because sometimes the message is transmitted so many times that it loses its essence and at some point, we take that practice as part of our identity, we defend it, and we don’t let different points of view in. ... We try to achieve this responsiveness with practices of different natures: technical, such as CI/CD (Continuous Integration/Continuous Deployment), and strategic, such as developing in iterations. However, we often forget about agility when we deal with the core of Software Development: coding. Imagine preparing your favorite meal or dessert without the main ingredient of the recipe.



Quote for the day:

"Rank does not confer privilege or give power. It imposes responsibility." -- Peter F. Drucker

Daily Tech Digest - June 05, 2023

How to create generative AI confidence for enterprise success

The key to enterprise-ready generative AI is in rigorously structuring data so that it provides proper context, which can then be leveraged to train highly refined large language models (LLMs). A well-choreographed balance between polished LLMs, actionable automation and select human checkpoints forms strong anti-hallucination frameworks that allow generative AI to deliver correct results that create real B2B enterprise value. ... The initial phase of any company’s system is the blank slate that ingests information tailored to a company and its specific goals. The middle phase is the heart of a well-engineered system, which includes rigorous LLM fine-tuning. OpenAI describes fine-tuning models as “a powerful technique to create a new model that’s specific to your use case.” This occurs by taking generative AI’s normal approach and training models on many more case-specific examples, thus achieving better results. In this phase, companies have a choice between using a mix of hard-coded automation and fine-tuned LLMs. 
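As a small, hedged illustration of the fine-tuning step, the sketch below prepares company-specific training examples as JSONL, the general shape most fine-tuning services ingest; the examples and field names are invented, and the exact schema depends on the provider and API version.

```python
import json

# Hypothetical, company-specific examples used to refine a base model.
examples = [
    {"prompt": "Customer asks about invoice INV-104 status ->",
     "completion": " Paid on 2023-05-28 via ACH."},
    {"prompt": "Customer asks about the return policy for industrial sensors ->",
     "completion": " Returns accepted within 30 days with an RMA number."},
]

# Fine-tuning services typically ingest one JSON object per line (JSONL);
# the exact field names vary by provider and API version.
with open("training_data.jsonl", "w") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")
```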


Governments worldwide grapple with regulation to rein in AI dangers

Although a number of countries have begun to draft AI regulations, such efforts are hampered by the reality that lawmakers constantly have to play catchup to new technologies, trying to understand their risks and rewards. “If we refer back to most technological advancements, such as the internet or artificial intelligence, it’s like a double-edged sword, as you can use it for both lawful and unlawful purposes,” said Felipe Romero Moreno, a principal lecturer at the University of Hertfordshire’s Law School whose work focuses on legal issues and regulation of emerging technologies, including AI. AI systems may also do harm inadvertently, since humans who program them can be biased, and the data the programs are trained with may contain bias or inaccurate information. “We need artificial intelligence that has been trained with unbiased data,” Romero Moreno said. “Otherwise, decisions made by AI will be inaccurate as well as discriminatory.”


10 notable critical infrastructure cybersecurity initiatives in 2023

In April, a group of OT security companies that usually compete with one another announced they were setting aside their rivalries to collaborate on a new vendor-neutral, open-source, and anonymous OT threat warning system called ETHOS (Emerging Threat Open Sharing). Formed as a nonprofit, ETHOS aims to share data on early threat indicators and discover new and novel attacks threatening industrial organizations that run essential services, including electricity, water, oil and gas production, and manufacturing systems. It has already gained US CISA endorsement, a boost that could give the initiative greater traction. All organizations, including public and private asset owners, can contribute to ETHOS at no cost, and founders envisage it evolving along the lines of open-source software Linux. ETHOS community and board members include some of the top OT security companies 1898 & Co., ABS Group, Claroty, Dragos, Forescout, NetRise, Network Perception, Nozomi Networks, Schneider Electric, Tenable, and Waterfall Security.


UK has time limit on ensuring cryptocurrency regulatory leadership

The report also said that interest in digital assets among investors and the general public led to the conclusion that cryptocurrency is more than a fad and is here to stay, and that cross-government planning is required if the UK wants to take the opportunities it offers. These recommendations followed contributions from the crypto sector, regulators, industry experts and the general public. The report said: “Other countries around the world are moving quickly to develop clear regulatory frameworks for cryptocurrency and digital assets. The UK must move within a finite window of opportunity within the next 12-18 months to ensure early leadership within this sector.” Scottish National Party MP and chair of the APPG, Lisa Cameron, said: “This is the first report of its kind compiled jointly involving Members of Parliament and the House of Lords and we are keen that it contributes to evidence-based policy development across the sector.”


3 things CIOs must do now to accurately hit net-zero targets

One of the immediate steps CIOs can take to accelerate sustainability goals is selecting energy-efficient software, which can have a major impact on energy consumption. Uniting Technology and Sustainability surveyed companies that said they were taking various approaches to incorporate sustainability throughout the software development lifecycle. ... This opportunity to collaborate with sustainability in mind extends to the influence CIOs hold over where and how employees work. By integrating remote working capabilities, the CIO plays a hand in an organization’s shift to an increasingly remote or hybrid workforce model—a move that can significantly reduce a company’s carbon footprint. This effort has the potential to not only create sustainability at scale, but increase employee satisfaction, which will power a more sustainable organization. ... CEOs believe new technology will allow them to reach sustainability goals and build resilience, with 55% of CEOs enhancing sustainability data collection capabilities, and 48% transitioning to a cloud infrastructure.


Serverless is the future of PostgreSQL

Shamgunov sees two primary benefits to running PostgreSQL serverless. The first is that developers no longer need to worry about sizing. All the developer needs is a connection string to the database without worrying about size/scale. Neon takes care of that completely. The second benefit is consumption-based pricing, with the ability to scale down to zero (and pay zero). This ability to scale to zero is something that AWS doesn’t offer, according to Ampt CEO Jeremy Daly. Even when your app is sitting idle, you’re going to pay. But not with Neon. As Shamgunov stresses in our interview, “In the SQL world, making it truly serverless is very, very hard. There are shades of gray” in terms of how companies try to deliver that serverless promise of scaling to zero, but only Neon currently can do so, he says. Do people care? The answer is yes, he insists. “What we’ve learned so far is that people really care about manageability, and that’s where serverless is the obvious winner. [It makes] consumption so easy. All you need to manage is a connection string.”
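To show how little there is to manage, here is a minimal sketch of connecting to a Postgres-compatible serverless database from Python with nothing but a connection string; the URL is a made-up placeholder, and sizing and scaling are left entirely to the provider.

```python
import psycopg2  # standard PostgreSQL driver; works against Postgres-compatible services

# Placeholder connection string: the only thing the developer manages.
DATABASE_URL = "postgresql://app_user:password@example-endpoint.aws.neon.tech/appdb?sslmode=require"

with psycopg2.connect(DATABASE_URL) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])
```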


Cloud conundrum: The changing balance of microservices and monolithic applications

Containers and microservices are great for applications that can put everything together in a single place, and make it easier for developers to run across many different platforms and computing equipment. Containers are also better at scaling up and down an application than starting and stopping a whole bunch of VMs, since they take a fraction of a second to bring up, versus minutes for a VM. But there are still tradeoffs. Here is one way to describe the situation: “The microservices architecture is more beneficial for complex and evolving applications. But if you have a small engineering team aiming to develop a simple and lightweight application, there is no need to implement them.” But it would be wise not to discount VMs entirely. They can be an important stepping stone from the on-premises world, as Southwire Co. LLC’s Chief Information Officer Dan Stuart told SiliconANGLE in a recent interview. “We had a lot of old technology in our data center and were already familiar with VMware, so that made the move to Google’s Cloud easier,” he said.


A Case for Event-Driven Architecture With Mediator Topology

The most straightforward cases for reliability involve the converter services. The service locks a message in the queue when it starts processing and deletes it when it has finished its work and sent the result. If the service crashes, the message will become available again in the queue after a short timeout and can be processed by another instance of the converter. If the load grows faster than new instances are added or there are problems with the infrastructure, messages accumulate in the queue. They will be processed right after the system stabilizes. In the case of the Mediator, all the heavy lifting is again done by the Workflow Core library. Because all running workflows and their state are stored in the database, if an abnormal termination of the service occurs, the workflows will continue execution from the last recorded state. Also, we have configurations to retry failed steps, timeouts, alternative scenarios, and limits on the maximum number of parallel workflows. What’s more, the entire system is idempotent, allowing every operation to be retried safely without side effects and mitigating the concern of duplicate messages being received.
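The lock, process, delete flow described above maps onto the visibility-timeout model most message queues use. Below is a generic Python sketch (with a hypothetical queue client, not the Workflow Core library mentioned in the article): the message is deleted only after the converter succeeds, so a crashed worker simply lets it reappear for another instance, and idempotent processing keeps that retry safe.

```python
import time

def process_forever(queue, convert, visibility_timeout=60):
    """Generic consumer loop: lock, process, delete, or let the message reappear."""
    while True:
        msg = queue.receive(visibility_timeout=visibility_timeout)  # hypothetical client API
        if msg is None:
            time.sleep(1)
            continue
        try:
            convert(msg.body)   # must be idempotent: a retry causes no side effects
            queue.delete(msg)   # acknowledge only after the work succeeded
        except Exception:
            # No delete: after the timeout the message becomes visible again
            # and another converter instance will pick it up.
            pass
```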


GDPR — How does it impact AI?

It is no surprise that legislation has lagged behind the unprecedented rise of AI, but this is where leaning more on data protection regulation may help to fill an important gap in the meantime. Another factor that has completely altered the landscape in the past five years is the UK’s exit from the EU, which brought additional complexities for the effective monitoring of personal data; while ‘UK GDPR’ is largely the same as the EU version, it does carry some slight differences which make it an imperative for companies to increase education around data usage to understand the new policy landscape and avoid running afoul of these differences. ... Looking ahead, although the landscape has undoubtedly become far more complex, I remain a firm believer that the GDPR and AI can still work successfully in tandem as long as rigorous measures, checks and best practices are embedded firmly into business strategies and on the proviso that AI-related policy also evolves as a way to supplement existing data regulations.


The metaverse: Not dead yet

“We are in a winter for the metaverse, and how long that chill lasts remains to be seen,” said J.P. Gownder, vice president and principal analyst on Forrester's Future of Work team. Late last year, the analyst firm predicted a drop-off in interest during 2023 as a more realistic picture of the technology’s current possibilities emerged. “The hype was way exceeding the reality of the capabilities of the technology, the interest from customers — both business and consumer — and just the overall maturity of the market.” Yet the metaverse concept isn’t going away. “We think that, in the future, something like the metaverse will exist, whereby we have a 3D experience layer over the internet,” said Gownder. Don’t expect this to happen any time soon, though: the development of the metaverse could take a decade, according to Forrester. ... As metaverse hype subsides, the underlying technologies continue to develop and evolve, on both the hardware and software front. ... “There continues to be steady development of metaverse-type concepts. But just like we saw with the march to autonomous vehicles, this takes a long time to mature and put into place,” Lightman said.



Quote for the day:

“Being a leader, at its core, is about how we show up each day to work with the people in our charge.” -- Claudio Toyama

Daily Tech Digest - June 04, 2023

Insider risk management: Where your program resides shapes its focus

Choi says that while the information security team is ultimately responsible for the proactive protection of an organization’s information and IP, most of the actual investigation into an incident is generally handled by the legal and HR teams, which require fact-based evidence supplied by the information security team. “The CIO/CISO team need to be able to supply facts and evidence in a consumable, easy-to-understand fashion and in the right format so their legal and HR counterparts can swiftly and accurately conduct their investigation.” ... Water flows downhill and so does messaging on topics that many consider ticklish, such as IRM programs. Payne noted that “few, if any CEOs wish to discuss their threat risk management programs as it projects negativity — i.e., ‘we don’t trust you’ and they prefer to have positive messaging.” Few CISOs enjoy having an IRM program under their remit as “who wants to monitor their colleagues?” Payne adds, “Whacking external threats is easy; when it’s your colleague it becomes more problematic.”


What is the medallion lakehouse architecture?

The medallion architecture describes a series of data layers that denote the quality of data stored in the lakehouse. Databricks recommends taking a multi-layered approach to building a single source of truth for enterprise data products. This architecture guarantees atomicity, consistency, isolation, and durability as data passes through multiple layers of validations and transformations before being stored in a layout optimized for efficient analytics. The terms bronze (raw), silver (validated), and gold (enriched) describe the quality of the data in each of these layers. It is important to note that this medallion architecture does not replace other dimensional modeling techniques. Schemas and tables within each layer can take on a variety of forms and degrees of normalization depending on the frequency and nature of data updates and the downstream use cases for the data. Organizations can leverage the Databricks Lakehouse to create and maintain validated datasets accessible throughout the company. 
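As a rough sketch of the bronze/silver/gold flow, here is PySpark-style code writing Delta tables; the paths, column names, and validation rules are invented for illustration, and each layer simply reads the layer below it, validates or enriches, and writes a cleaner table.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw ingest, stored as-is with load metadata.
bronze = (spark.read.json("/landing/orders/")              # invented landing path
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/lake/bronze/orders")

# Silver: validated data - drop duplicates and malformed rows.
silver = (spark.read.format("delta").load("/lake/bronze/orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("amount") > 0))
silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")

# Gold: enriched, analytics-ready aggregate.
gold = (spark.read.format("delta").load("/lake/silver/orders")
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("lifetime_value")))
gold.write.format("delta").mode("overwrite").save("/lake/gold/customer_value")
```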


AppSec ‘Worst Practices’ with Tanya Janca

Having reasonable service-level agreements is so important. When I work with enterprise clients, they already have tons of software that’s in production doing its thing, but they’re also building and updating new stuff. So I have two service-level agreements and one is the crap that was here when I got here and the other stuff is all the beautiful stuff we’re making now. So I’ll set up my tools so that you can have a low vulnerability, but if it’s medium or above, it’s not going to production if it’s new. But all the stuff that was there when I scanned for the first time, we’re going to do a slower service-level agreement. That way we can chip away at our technical debt. The first time I came up with parallel SLAs was when this team lead asked, “Am I going to get fired because we have a lot of technical debt, and it would literally take us a whole year just to do the updates from the little software compositiony thing you were doing.” “No one’s getting fired!” I said. So that’s how we came up with the parallel SLAs so we could pay legacy technical debt down slowly like a student loan versus handling new development like credit card debt that gets paid every single month. There’s no running a ticket on the credit card!
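A minimal sketch of the two parallel SLAs described above (thresholds, IDs, and field names are invented): findings in new code fail the pipeline at medium severity and above, while findings already in the legacy baseline are deferred to a slower remediation clock instead of blocking the release.

```python
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, legacy_baseline_ids, new_code_block_at="medium"):
    """Return (blocked, deferred): fail the pipeline only on new findings."""
    blocked, deferred = [], []
    for f in findings:                      # f = {"id": ..., "severity": ...}
        is_legacy = f["id"] in legacy_baseline_ids
        severe = SEVERITY[f["severity"]] >= SEVERITY[new_code_block_at]
        if severe and not is_legacy:
            blocked.append(f)               # new debt: not going to production
        elif is_legacy:
            deferred.append(f)              # old debt: slower SLA, paid down over time
    return blocked, deferred

findings = [{"id": "CVE-A", "severity": "high"}, {"id": "CVE-B", "severity": "high"}]
print(gate(findings, legacy_baseline_ids={"CVE-A"}))  # CVE-B blocks, CVE-A is deferred
```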


Revolutionizing the Nine Pillars of DevOps With AI-Engineered Tools

Leadership Practices: Leadership is vital to drive cultural changes, set vision and goals, encourage collaboration and ensure resources are allocated properly. Strong leadership fosters a successful DevOps environment by empowering teams and supporting innovation. AI can assist leaders in decision-making by analyzing large datasets to identify trends and predict outcomes, providing valuable insights to guide strategic planning. Collaborative Culture Practices: DevOps thrives in a culture of openness, transparency and shared responsibility. It’s about breaking down the silos that can exist between different teams and promoting effective communication and collaboration. AI-powered tools can improve collaboration through smart recommendations, fostering more effective communication and knowledge sharing. Design-for-DevOps Practices: This involves designing software in a way that supports the DevOps model. This can include aspects like microservices architecture, modular design and considering operability and deployability from the earliest stages of design.


The ethics of innovation in generative AI and the future of humanity

Humans answer questions based on our genetic makeup (nature), education, self-learning and observation (nurture). A machine like ChatGPT, on the other hand, has the world’s data at its fingertips. Just as human biases influence our responses, AI’s output is biased by the data used to train it. Because data is often comprehensive and contains many perspectives, the answer that generative AI delivers depends on how you ask the question. AI has access to trillions of terabytes of data, allowing users to “focus” their attention through prompt engineering or programming to make the output more precise. This is not a negative if the technology is used to suggest actions, but the reality is that generative AI can be used to make decisions that affect humans’ lives. ... We have entered a crucial phase in the regulatory process for generative AI, where applications like these must be considered in practice. There is no easy answer as we continue to research AI behavior and develop guidelines


7 CIO Nightmares And How Enterprise Architects Can Help

The deeper you dig into cyber security, the more you find. Do you know what data your business actually needs to secure? A mission-critical application might be dependent on a spreadsheet in an outdated system. That data may be protected under regulation, but supplied from a cloud-based application that's reliant on open-source coding, and so on. Every CIO needs to know the top-ten, mission-critical, crown jewel applications and data centers that their business cannot live without, and what their connections and dependencies are. Each needs to have a clear plan of action in case of a security breach. The Solution: Mapping your tech stack with an enterprise architecture management (EAM) tool allows you to see exactly how mission critical each application is. This equates one-to-one with how much you need to invest in cyber security for each area. You can also gain clarity on which application is dependent on which platform. Likewise, you can find where crucial data is stored and where it feeds to.


7 Stages of Application Testing: How to Automate for Continuous Security

Pen testing allows organizations to simulate an attack on their web application, identifying areas of weakness that could be exploited by a malicious attacker. When done correctly, pen testing is an effective way to detect and remediate security vulnerabilities before they can be exploited. ... Traditional pen testing delivery often takes weeks to set up and the results are point in time. With the rise of DevOps and cloud technology, traditional once-a-year pen testing is no longer sufficient to ensure continuous security. To protect against emerging threats and vulnerabilities, organizations need to execute ongoing assessments: continuous application pen testing. Pen Testing as a Service (PTaaS) offers a more efficient process for proactive and continuous security compared to traditional pen testing approaches. Organizations are able to access a view into their vulnerability findings in real time, via a portal that displays all relevant data for parsing vulnerabilities and verifying the effectiveness of a remediation as soon as vulnerabilities are discovered.


Technological Innovation Poses Potential Risk of Rising Agricultural Product Costs

While technology has undeniably improved farming practices, its implementation requires significant financial investment. The upfront costs associated with purchasing advanced machinery, upgrading infrastructure, and adopting new technologies can burden farmers, particularly smaller-scale operations. These costs can ultimately be passed on to consumers, potentially leading to an increase in the prices of agricultural products. The seductive promises of cutting-edge machinery, precision agriculture, and genetically modified crops have mesmerised farmers worldwide. It is true, these technological marvels have unleashed unprecedented efficiency, capable of revolutionising the way we grow and harvest our sustenance. Yet, in their wake, they leave a trail of exorbitant expenses, shaking the very foundation of the agricultural landscape. ... Modern farming equipment is often equipped with advanced technology and features that improve efficiency, precision, and productivity.


Open Source Jira Alternative, Plane, Lands

Indeed, “Plane is a simple, extensible, open source project and product management tool powered by AI. It allows users to start with a basic task-tracking tool and gradually adopt various project management frameworks like Agile, Waterfall, and many more,” wrote Vihar Kurama, co-founder and COO of Plane, in a blog post. Yet, “Plane is still in its early days, not everything will be perfect yet, and hiccups may happen. Please let us know of any suggestions, ideas, or bugs that you encounter on our Discord or GitHub issues, and we will use your feedback to improve on our upcoming releases,” the description said. Plane is built using a carefully selected tech stack, comprising Next.js for the frontend and Django for the backend, Kurama said. “We utilize PostgreSQL as our primary database and Redis to manage background tasks,” he wrote in the post. “Additionally, our architecture includes two microservices, Gateway and Pilot. Gateway serves as a proxy server to our database, preventing the overloading of our primary server, while Pilot provides the interface for building integrations. ...”


Emerging AI Governance is an Opportunity for Business Leaders to Accelerate Innovation and Profitability

Firstly, regulation can help establish clear guidelines and standards for developing and deploying AI systems, for example, standards in accuracy, reliability, and risk management. Such guidelines can provide a stable and predictable framework for innovation, reducing uncertainty and risk in AI system development. This will increase participation in the field from developers and encourage greater investment from public and private organizations, thereby boosting the industry as a whole. ... Governments and governance organizations have a strong history of successfully investing in AI technologies and their inputs (e.g., Open Data Institute, Horizon Europe), as well as acting as demand side stimulators for long-term, high-risk innovations that are the foundations of many of the technologies we use today. Such examples include innovation at DARPA that formed the foundations of the Internet, or financial support to novel technologies through subsidy systems e.g., consumer solar panels.



Quote for the day:

"Try not to become a man of success but a man of value." -- Albert Einstein

Daily Tech Digest - June 03, 2023

Is it Possible to Calculate Technology Debt?

Perhaps we should rename it Architectural Debt or even Organisational Debt? From an Enterprise Architecture standpoint, we talk about “People, Processes, and Technology,” all of which contribute to the debt over time and form a more holistic view of the real debt. It does not matter what it is called as long as there is consistency within the organisation and it has been defined, agreed and communicated. ... The absence of master data management, quality, data lineage, and data validation all contribute to data debt. People debt is caused by having to support out-of-date assets (software and/or infrastructure), the resulting deskilling over time, and the missed opportunity to reskill, all of which can lead to employee attrition. Processes requiring modification can become dependent on technology due to the high cost of change, or to the alternative of adjusting the design to accommodate poorly designed processes. While Robotic Process Automation (RPA) can provide a rapid solution in such cases, it raises the question of whether the automation simply perpetuates flawed processes without addressing the underlying issue.
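If an organisation wants a number rather than just a name, one simple approach is to score each debt category on an agreed scale and roll the scores up with weights that reflect local priorities. The sketch below is a minimal illustration under assumed categories, weights, and a 0-5 scale; it is not a standard formula.

```python
# Illustrative composite "debt" score over the categories discussed above.
# Categories, weights, and the 0-5 scale are assumptions for demonstration.
WEIGHTS = {"technology": 0.35, "data": 0.25, "people": 0.20, "process": 0.20}

def composite_debt_score(scores: dict[str, float]) -> float:
    """Weighted average of per-category debt scores (0 = none, 5 = severe)."""
    return sum(WEIGHTS[cat] * scores.get(cat, 0.0) for cat in WEIGHTS)

example = {"technology": 4.0, "data": 3.0, "people": 2.5, "process": 3.5}
print(round(composite_debt_score(example), 2))  # -> 3.35 under these assumed weights
```

Whatever the exact model, the point the excerpt makes stands: the categories, scale, and weights only work if they are defined, agreed, and communicated consistently across the organisation.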


There Are Four Types of Data Observability. Which One is Right for You?

Business KPI Drifts: Since data observability tools monitor the data itself, they are often used to track business KPIs just as much as they track data quality drifts. For example, they can monitor the range of transaction amounts and notify when spikes or unusual values are detected. This autopilot system will show outliers in bad data and help increase trust in good data. Data Quality Rule Building: Data observability tools have automated pattern detection, advanced profiling, and time-series capabilities and, therefore, can be used to discover and investigate quality issues in historical data to help build and shape the rules that should govern the data going forward. Observability for a Hybrid Data Ecosystem: Today, data stacks consist of data lakes, warehouses, streaming sources, structured, semi-structured, and unstructured data, API calls, and much more. ... Unlike metadata monitoring, which is limited to sources with sufficient metadata and system logs – a property that streaming data or APIs don’t offer – data observability cuts through to the data itself and does not rely on these utilities.
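As a concrete illustration of the KPI-drift idea, the sketch below flags transaction amounts that sit far outside a trailing window, a simple stand-in for the spike detection described above. The window size and z-score threshold are assumptions, not defaults of any particular observability tool.

```python
# Minimal KPI-drift check: flag values far outside the recent distribution.
from statistics import mean, stdev

def drift_alerts(amounts: list[float], window: int = 50, z_threshold: float = 3.0) -> list[int]:
    """Return indices of values more than z_threshold standard deviations from the trailing window."""
    alerts = []
    for i in range(window, len(amounts)):
        hist = amounts[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma and abs(amounts[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts
```

A real tool would run a check like this continuously against streaming or warehouse data and route alerts to the team that owns the KPI.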


Why Companies Should Consider Developing A Chief Security Officer Position

The combination of the top-down and cross-functional influence of the CSO with the technical reach of the CISO should be key to creating and maintaining the momentum required to deliver change and break business resistance where it happens. In my experience, firms looking to implement this type of CSO position should start looking internally for the right executive: ultimately the role is all about trust, and your candidate should have intimate knowledge of how to navigate the internal workings of the organization. I would recommend looking for someone who is an ambitious leader, not someone in an end-of-career position. Additionally, consider assigning this role to a seasoned executive: someone you believe is motivated above all by the protection of the business from active threats and able to take an elevated, long-term view where required, over and above the short-term fluctuations of any business. Demonstrating leadership in a field this complex should be seen as an opportunity to showcase skills that can be applied elsewhere in the organization.


Threatening botnets can be created with little code experience, Akamai finds

According to the research, the Dark Frost actor is selling the tool as a DDoS-for-hire exploit and as a spamming tool. “This is not the first exploit by this actor,” said West, who noted that the attacker favors Discord to openly tout their wares and brag. “He was taking orders there, and even posting screenshots of their bank account, which may or may not be legitimate.” ... The Dark Frost botnet uses code from the infamous Mirai botnet, which West said was easy to obtain and highly effective in exploiting hundreds of machines, and is therefore emblematic of how, with source code from previously successful malware strains and AI code generation, someone with minimal knowledge can launch botnets and malware. “The author of Mirai put out the source code for everyone to see, and I think that it started and encouraged the trend of other malware authors doing the same, or of security researchers publishing source code to get a bit of credibility,” said West.


Experts say stopping AI is not possible — or desirable

"These systems are not imputed with the capability to do all the things that they're now able to do. We didn’t program GPT-4 to write computer programs but it can do that, particularly when it’s combined with other capabilities like code interpreter and other programs and plugins. That’s exciting and a little daunting. We’re trying to get our hands wrapped around risk profiles of these systems. The risk profiles, which are evolving literally on a daily basis. “That doesn't mean it's all net risk. There are net benefits as well, including in the safety space. I think [AI safety research company] Anthropic is a really interesting example of that, where they are doing some really interesting safety testing work where they are asking a model to be less biased and at a certain size they found it will literally produce output that is less biased simply by asking it. So, I think we need to look at how we can leverage some of those emerging capabilities to manage the risk of these systems themselves as well as the risk of what’s net new from these emerging capabilities.”


How IT can balance local needs and global efficiency in a multipolar world

Technical architecture solutions, such as microservices, can help companies balance the level of local solution tailoring with the need to harness scale efficiencies. While not new, these solutions are more widely accepted and can be more easily realized in modern cloud platforms. These developments are enabling leading companies to evolve their operating models by building standardized, modular, and configurable solutions that maximize business flexibility and efficiency while making data management more transparent ... However useful these localization capabilities are, they will not work as needed unless local teams have sufficient autonomy (at some companies, local teams in China, for example, clear decisions through central headquarters, which is a major roadblock for pace and innovation). The best companies provide local teams with specific decision rights within guidelines and support them by providing necessary capabilities, such as IT talent embedded with local market teams to get customer feedback early.
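One lightweight way to picture "standardized, modular, and configurable" is a global baseline configuration that local teams override within agreed guardrails. The sketch below is a minimal illustration; the settings and markets are assumptions, not a description of any specific company's platform.

```python
# Illustrative "global baseline + local override" configuration pattern.
GLOBAL_DEFAULTS = {
    "currency": "USD",
    "payment_providers": ["visa", "mastercard"],
    "data_residency": "us-east",
}

LOCAL_OVERRIDES = {
    "CN": {"currency": "CNY", "payment_providers": ["unionpay", "alipay"], "data_residency": "cn-north"},
    "DE": {"currency": "EUR", "data_residency": "eu-central"},
}

def market_config(market: str) -> dict:
    """Merge a market's overrides onto the global baseline; unknown markets get the defaults."""
    return {**GLOBAL_DEFAULTS, **LOCAL_OVERRIDES.get(market, {})}

print(market_config("CN"))
```

The technical layering is the easy part; as the excerpt notes, it only pays off when local teams also hold the decision rights to populate their overrides quickly.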


Constructing the innovation mandate

We need to understand that successful innovation touches all aspects of a business: it contributes to improving business processes, identifies new, often imaginative ways to reduce costs, builds existing business models out in new directions of value, and discovers new ways of positioning in markets. To achieve consistent innovation and creativity within organizations, you need to rely on process, structure, and the consistent ability to foster a culture of innovation. An innovation mandate is a critical tool for defining the scope and direction of innovation and the underlying values, commitment and resources placed behind it. Normally this innovation mandate comes in the form of a document, generally built by a small team of senior leaders, innovation experts and subject matter experts. That group should possess a deep understanding of the existing organization’s strategy, business models, operations and culture, and a wider appreciation of the innovation landscape, the “fields of opportunity” and the emerging practices of innovation management.


3 Unexpected Technology Duos That Will Supercharge Your Marketing

While geofencing isn't the newest technology to enter the marketing spectrum, it is improving exponentially day by day. Geofencing creates virtual geographic boundaries around targeted areas, and when someone crosses into one of those areas, it creates a triggered response — your ads will show up while they're browsing their favorite sites or checking their email. ... Website content can be a major trust builder for your business and therefore can play a vital part in turning an interested prospect into a buying customer. But many a business owner has cringed at the thought of writing copy for their website ... let alone regularly updating it with blog posts or e-newsletter articles. Creating large amounts of content can be a constant challenge for business owners, and I get it. You're already busy running a business! But what I want small business owners to realize is that they have access to many tools — some of them free — that will do 95% of the writing for you.
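Under the hood, a geofence trigger is essentially a distance check: when a reported location falls within the fence radius, the platform fires the ad or notification. The sketch below shows that check using the haversine formula; the fence coordinates, radius, and trigger action are assumptions for illustration, not any ad platform's API.

```python
# Minimal geofence check: is a device inside a circular fence around a target location?
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

FENCE = {"lat": 40.7580, "lon": -73.9855, "radius_km": 1.0}  # assumed fence, e.g. around a store

def on_location_update(lat: float, lon: float) -> None:
    """Fire the marketing trigger when the device enters the fence."""
    if haversine_km(lat, lon, FENCE["lat"], FENCE["lon"]) <= FENCE["radius_km"]:
        print("Inside the fence: queue the targeted ad impression")
```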


The Evolution of the Chief Privacy Officer

Given the natural overlap between privacy, security and the uses of data, strategic cooperation is key. “It’s about building a strategy together to develop an enterprise approach,” Jones said. “My role is to build privacy and transparency into every state system and application and business process at every stage of the life cycle.” Cotterill looks to Indiana’s IT org chart to help define the spheres of responsibility. The governor appoints the chief information officer and chief data officer, and the CISO and CPO report to each of them, respectively. “The CIO, and the CISO reporting to him, they’re focused on providing cost-effective, secure, consistent, reliable enterprise IT services and products,” he said. “For the CDO, with the CPO reporting to him … we have a threefold mission: to empower innovation, enable the use of open data, and do that all while maintaining data privacy.” IT provides “that secure foundation to do business,” while he and the CDO “are focused on the substantive use of data to drive decisions and improve outcomes,” he said.


Should Data Engineers be Domain Competent?

A traditional data engineer views a table with one million records as relational rows that must be crunched, transported and loaded to a different destination. In contrast, an application programmer approaches the same table as a set of member information or pending claims that impact lives. The former is a purely technical view, while the latter is more human-centric. These drastically differing lenses form the genesis of the data siloes ... When we advocate domain knowledge, let’s not relegate it to a few business analysts who are tasked to translate a set of high-level requirements into user stories. Rather, domain knowledge implies that every data engineer gets a grip on the intrinsic understanding of how functionality flows and what it tries to accomplish. Of course, this is easier to preach than practice, as expecting a data team to understand thousands of tables and millions of rows is akin to expecting them to navigate a freeway at peak time, in reverse gear, blindfolded. It would be disastrous. While it's amply evident that data teams need domain knowledge, it’s hard to expect that centralized data teams will deliver efficient results.



Quote for the day:

"Leaders are visionaries with a poorly developed sense of fear and no concept of the odds against them. " -- Robert Jarvik