Daily Tech Digest - February 17, 2024

Europe’s Digital Services Act applies in full from tomorrow - here’s what you need to know

In one early sign of potentially interesting times ahead, Ireland’s Coimisiún na Meán has recently been consulting on rules for video sharing platforms that could force them to switch off profiling-based content feeds by default in that local market. In that case the policy proposal was being made under EU audio visual rules, not the DSA, but given how many major platforms are located in Ireland the Coimisiún na Meán, as DSC, could spin up some interesting regulatory experiments if it take a similar approach when it comes to applying the DSA on the likes of Meta, TikTok, X and other tech giants. Another interesting question is how the DSA might be applied to fast-scaling generative AI tools. The viral rise of AI chatbots like OpenAI’s ChatGPT occurred after EU lawmakers had drafted and agreed the DSA. But the intent for the regulation was for it to be futureproofed and able to apply to new types of platforms and services as they arise. Asked about this, a Commission official said they have identified two different situations vis-à-vis generative AI tools: One where a VLOP is embedding this type of AI into an in-scope platform  — where they said the DSA does already apply. 


Composable Architectures vs. Microservices: Which Is Best?

Composable architecture is a modular approach to software design and development that builds flexible, reusable and adaptable software architecture. It entails breaking down extensive, monolithic platforms into small, specialized, reusable and independent components. This architectural pattern comprises a pluggable array of modular components, such as microservices, packaged business capability (PBC), headless architecture and API-first development that can be seamlessly replaced, assembled and configured to align with business requirements. In a composable application, each component is developed independently using the technologies best suited to the application’s functions and purpose. This enables businesses to build customized solutions that can swiftly adapt to business needs. ... The composable approach has gained significant popularity in e-commerce applications and web development for enhancing the digital experience for developers, customers and retailers, with industry leaders like Shopify and Amazon taking advantage of its benefits.


Nginx core developer quits project in security dispute, starts “freenginx” fork

Comments on Hacker News, including one by a purported employee of F5, suggest Dounin opposed the assigning of published CVEs to bugs in aspects of QUIC. While QUIC is not enabled in the most default Nginx setup, it is included in the application's "mainline" version, which, according to the Nginx documentation, contains "the latest features and bug fixes and is always up to date." ... MZMegaZone confirmed the relationship between security disclosures and Dounin's departure. "All I know is he objected to our decision to assign CVEs, was not happy that we did, and the timing does not appear coincidental," MZMegaZone wrote on Hacker News. He later added, "I don't think having the CVEs should reflect poorly on NGINX or Maxim. I'm sorry he feels the way he does, but I hold no ill will toward him and wish him success, seriously." Dounin, reached by email, pointed to his mailing list responses for clarification. He added, "Essentially, F5 ignored both the project policy and joint developers' position, without any discussion." MegaZone wrote to Ars (noting that he only spoke for himself and not F5), stating, "It's an unfortunate situation, but I think we did the right thing for the users in assigning CVEs and following public disclosure practices. 


Bridging Silos and Overcoming Collaboration Antipatterns in Multidisciplinary Organizations

Some problems with these anti-patterns. I'm going to talk again in threes, I've talked about three anti-patterns, one role across many teams, product versus engineering wars, and X-led. I'm going to talk about some of the problems with these. The first one is one group holds the power. One group holds all the decision-making power, and others can't properly contribute. They aren't given the opportunity to contribute. In our first example, Anita the designer doesn't hold any power because all she's doing is playing catch-up. She's got no time to really contribute to decisions. In the second anti-pattern in the product versus engineering, there's always a battle between who holds the power. It's not collaborative, there's silos between the two. ... Professional protectionism is about people protecting their professional boundaries and not letting other people step into them. It's like, "No, this is my area, you stay over there and you do your thing and I'll do my thing over here." Maybe some people have experienced this. For example, I was working with an organization recently and they said the user research team didn't want to publish how they did user research, because other people might do it.


Scalability Challenges in Microservices Architecture: A DevOps Perspective

Although microservices architectures naturally lend themselves to scalability, challenges remain as systems grow in size and complexity. Efficiently managing how services discover each other and distribute loads becomes complex as the number of microservices increases. Communication across complex systems also introduces a degree of latency, especially with increased traffic, and leads to an increased attack surface, raising security concerns. Microservices architectures also tend to be more expensive to implement than monolithic architectures. Creating secure, robust, and well-performing microservices architectures begins with design. Domain-driven design plays a vital role in developing services that are cohesive, loosely coupled, and aligned with business capabilities. Within a genuinely scalable architecture, every service can be deployed, scaled, and updated autonomously without affecting the others. One essential aspect of effectively managing microservices architecture involves adopting a decentralized governance model, in which each microservice has a dedicated team in charge of making decisions related to the service


CQRS Pattern in C# and Clean Architecture – A Simplified Beginner’s Guide

When implementing Clean Architecture in C#, it’s important to recognize the role each of the four components plays. Entities and Use Cases represent the application’s core business logic, Interface Adapters manage the communication between the Use Cases and Infrastructure components, and Infrastructure represents the outermost layer of the architecture. To implement Clean Architecture successfully, we have some best practices to keep in mind. For instance, Entities and Use Cases should be agnostic to the infrastructure and use plain C# classes, providing a decoupled architecture that avoids excess maintenance. Additionally, applying the SOLID principles ensures that the code is flexible and easily extensible. Lastly, implementing use cases asynchronously can help guarantee better scalability. Each component of Clean Architecture has a specific role to play in the implementation of the overall architecture. Entities represent the business objects, Use Cases implement the business logic, Interface Adapters handle interface translations, and Infrastructure manages the communication to the outside world. 


AI in practice - Celonis’ VP shares how AI can support system & process change

Brown said that, at the moment, Celonis is seeing AI being used to expedite often tedious work, or work that often is prone to human error. Looking back at the adoption of previous general purpose technologies, this makes sense. More often than not the tools that are adopted early on are applied to use cases that take time, don’t add a significant amount of value, and where mistakes are easily made by people. ... Brown also had some thoughts regarding how enterprises should consider their approach to AI adoption, with a focus on not isolating people away from the technology - keeping them close to the change and bringing them along on the journey. Firstly, Brown acknowledged that this is going to be challenging, given the tendency for employees to ‘build empires’ within enterprises and protect them at all costs. She said: I'll go back to a phrase I used for a long, long time and I still use: people don't hurt what they own. So if I'm invested in it, and it's part of what I care about, I'm going to protect it and grow it. If I boil down change management into one sentence, it’s about expectations and accountability. So, what can I expect to be different and what do I need to do differently?


Open Agile Architecture: A Comprehensive Guide for Enterprise Architecture Professionals

Open Agile Architecture equips you with a methodology that seamlessly integrates Agile principles into the realm of enterprise architecture. In today's business environment, change is constant. Open Agile Architecture allows you to respond swiftly and effectively to evolving business needs, technological advancements, and market dynamics. ... Collaboration is at the heart of Agile methodologies, and Open Agile Architecture extends this principle to enterprise architecture. By promoting cross-functional collaboration and open communication, the methodology breaks down silos within the organization. As a practitioner, you'll experience improved collaboration between business and IT teams, fostering a shared understanding of goals and priorities. ... Open Agile Architecture emphasizes an iterative and incremental approach to development. This means that instead of long, rigid planning cycles, you work on delivering incremental value in shorter iterations. This not only ensures continuous progress but also allows you to demonstrate tangible outcomes to stakeholders regularly.


Microsoft Copilot is preparing advances in data protection

As the company has revealed through the Bing blog, Copilot is being prepared to maximize the protection of user and company data that use this system. With this, Microsoft wants to make it clear that the company’s priority is to show that it has no interest in user data while using Copilot services in its 365 versions. It is evident that Copilot is becoming a fundamental piece of Microsoft’s brand strategy, and precisely for that reason they want to distance it from some of the main stigmas that AI currently has. For its part, Copilot is already deeply integrated into various Microsoft services, such as Bing or Teams, where it offers considerable support to the user. One of the concerns that many users have when using Artificial Intelligence is the mere fact of being part of the learning and training process by the AI. As these tools are constantly evolving, many of these systems used the users’ own usage to create variations and advancements in various subjects. However, over time, it has been shown that, in many cases, this has ended up “dumbing down” the AI. However, many users find it quite ironic that an AI, which is trained precisely by collecting data massively illicitly through the Internet, has to actively demonstrate a system that ensures that Copilot will not use user data to continue improving.


Artificial intelligence needs to be trained on culturally diverse datasets to avoid bias

Culture plays a significant role in shaping our communication styles and worldviews. Just like cross-cultural human interactions can lead to miscommunications, users from diverse cultures that are interacting with conversational AI tools may feel misunderstood and experience them as less useful. To be better understood by AI tools, users may adapt their communication styles in a manner similar to how people learned to “Americanize” their foreign accents in order to operate personal assistants like Siri and Alexa. ... AI is already in use as the backbone of various applications that make decisions affecting people’s lives, such as resume filtering, rental applications and social benefits applications. For years, AI researchers have been warning that these models learn not only “good” statistical associations — such as considering experience as a desired property for a job candidate — but also “bad” statistical associations, such as considering women as less qualified for tech positions. As LLMs are increasingly used for automating such processes, one can imagine that the North American bias learned by these models can result in discrimination against people from diverse cultures.



Quote for the day:

''Failure will never overtake me if my de determination to succeed is strong enough.'' -- Og Mandino

Daily Tech Digest - February 16, 2024

GitHub: AI helps developers write safer code, but you need to get the basics right

With cybercriminals largely sticking to the same tactics, it is critical that security starts with the developer. "You can buy tools to prevent and detect vulnerabilities, but the first thing you need to do is help developers ensure they're building secure applications," Hanley said in a video interview with ZDNET. As major software tools, including those that power video-conferencing calls and autonomous cars, are built and their libraries made available on GitHub, if the accounts of people maintaining these applications are not properly secured, malicious hackers can take over these accounts and compromise a library. The damage can be wide-reaching and lead to another third-party breach, such as the likes of SolarWinds and Log4j, he noted. Hanley joined GitHub in 2021, taking on the newly created role of CSO as news of the colossal SolarWinds attack spread. "We still tell people to turn on 2FA...getting the basics is a priority," he said. He pointed to GitHub's efforts to mandate the use of 2FA for all users, which is a process that has been in the works during the last one and a half years and will be completed early this year. 


Why Tomago Aluminium reversed course on its cloud journey

“An ERP solution like ours is massive,” he says, highlighting that this can make it difficult to keep track of everything you are, and not, using. For instance, he says if you’re getting charged $20,000 for electricity, you might want to check your meter and verify that your usage and bill align. “If your electricity meter is locked away and you just get a piece of paper at the end of the month telling you everything’s fine and you owe $20 000, you’re probably going to ask some questions,” he says. Tomago was told everything was secure and running as it should, but they had no way to verify what they were being told was accurate. “We essentially had a swarm of big black boxes,” he says. “We put dollars in and got services out, but couldn’t say to the board, with confidence, that we were really in control of things like compliance, security, and due diligence.” Then in 2020, Tomago moved its ERP system back on-prem — a decision that’s paying dividends. “We now know what our position is from a cyber perspective because we know exactly what our growth rates are, and we know that our systems are up-to-date, and what our cost is because it’s the same every month,” he says.


OpenAI and Microsoft Terminate State-Backed Hacker Accounts

Threat actors linked to Iran and North Korea also used GPT-4, OpenAI said. Nation-state hackers primarily used the chatbot to query open-source information, such as satellite communication protocols, and to translate content into victims' local languages, find coding errors and run basic coding tasks. "The identified OpenAI accounts associated with these actors were terminated," OpenAI said. It conducted the operation in collaboration with Microsoft. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI," the Redmond, Washington-based technology giant said. Microsoft's relationship with OpenAI is under scrutiny by multiple national antitrust authorities. A British government study published earlier this month concluded that large language models may boost the capabilities of novice hackers but so far are of little use to advanced threat actors. China-affiliated Charcoal Typhoon used ChatGPT to research companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns. 


Why Most Founders and Investors Are Wrong About Disruption

Recognizing disruption requires an open mind. In many instances, people can't believe or see something is disruptive at first. They think the idea is foolish or won't work. Disruption is usually caused by something that hasn't existed before or something new. Airbnb is a great example here as well. Its founders are said to have gone to every venture capitalist in Silicon Valley and were famously laughed out of meetings. People couldn't see what they saw — it hadn't been invented yet. Even the most seasoned business leaders can misunderstand and mistake disruption or fail to recognize it. Disruption doesn't always mean extinction. History has proven this for countless companies, processes, products, services, and ideas. Organizations can collapse after big changes. They did not or could not adapt. But something new or different tends to fill in the gap. It's often better, and the cycle continues. I have been on both sides of disruption at my company, BriteCo. We are one of the jewelry industry's disruptors – we were the first to move jewelry consumers to 100% paperless processes with technology and the internet. We also provide our customers with different ways to buy our coverage, unique to BriteCo, versus an outdated analog process at the retail point of sale.


Will generative AI kill KYC authentication?

Lee Mallon, the chief technology officer at AI vendor Humanity.run, sees an LLM cybersecurity threat that goes way beyond quickly making false documents. He worries that thieves could use LLMs to create deep back stories for their frauds in case someone at a bank or government level reviews social media posts and websites to see if a person truly exists. “Could social media platforms be getting seeded right now with AI-generated life histories and images, laying the groundwork for elaborate KYC frauds years down the line? A fraudster could feasibly build a ‘credible’ online history, complete with realistic photos and life events, to bypass traditional KYC checks. The data, though artificially generated, would seem perfectly plausible to anyone conducting a cursory social media background check,” Mallon says. “This isn’t a scheme that requires a quick payoff. By slowly drip-feeding artificial data onto social media platforms over a period of years, a fraudster could create a persona that withstands even the most thorough scrutiny. By the time they decide to use this fabricated identity for financial gains, tracking the origins of the fraud becomes an immensely complex task.”


Generative AI: Shaping a New Future for Fraud Prevention

A new category called "AI Risk Decisioning" is poised to transform the landscape of fraud detection. It leverages the strengths of generative AI, combining them with traditional machine learning techniques to create a robust foundation for safeguarding online transactions. ... The first pillar involves creating a comprehensive knowledge fabric that serves as the foundation for the entire platform. This fabric integrates various internal data sources unique to the company, such as transaction records and real-time customer profiles. ... The third pillar of the AI Risk Decisioning approach focuses on automatic recommendations, offering powerful capabilities for real-time and effective risk management. It can automatically monitor transactions and identify trends or anomalies, suggest relevant features for risk models, conduct scenario analyses independently, and recommend the next best action to optimize performance. ... The fourth pillar of the AI Risk Decisioning approach emphasizes human-understandable reasoning. This pillar aims to make every decision, recommendation, or insight provided by the AI system easily understandable to human users.


Implementing a Digital Transformation Strategy

Actionable intelligence has been accepted as the “new normal” of the data-first enterprise. In the data-first enterprise, data and digital technologies not only open up innovative revenue channels but also create the most compliant (governed) business operations. However, in order for an enterprise to successfully plan, develop, and execute a data-first operating model, the business owners and operators have to first develop a digital transformation strategy – connecting the data piles, digital technologies, business processes, and marketing staff. The digital transformation strategy develops around the need to bridge the gaps between the current data-driven goals and processes and intended future business goals and processes. In a nutshell, the digital transformation strategy strikes a harmonious balance between traditional IT and marketing functions. Global businesses have witnessed firsthand the immense benefits of digital processes, such as improved efficiencies, reduced operating costs, and growth of additional revenue channels. A recent industry survey report indicated that 92% of businesses are already pursuing digital transformation in more than one way. However, the transformation across businesses is at various stages of maturity.


Planning a data lake? Prepare for these 7 challenges

Storing data in a central location simplifies compliance in the sense that you know where your data resides, though it also creates compliance challenges. If you store many different types of data in your lake, different assets may be subject to different compliance standards. Data that contains personally identical information (PII), for instance, must be managed differently in some ways than other types of data to comply with laws like DPA, GDPR or HIPAA. While a data lake won’t prevent you from applying granular security controls to different data assets, it doesn't make it easier, either – and it can make it more difficult if your security and compliance tools are not capable of applying different policies to different data assets within a centralized repository. ... Placing your data into a central location to create a data lake is one thing but connecting it to various applications and the workforce who needs access is another. Until you develop the necessary data integrations – and unless you keep them up to date – your data lake will deliver little value. Building data integrations takes time, effort, and expertise and users sometimes underestimate how difficult it is to create successful data integrations. Be sure and prioritize data integration strategy as part of your overall process.


Does Cloud Native Change Developer Productivity and Experience?

When management focuses too much on developer productivity, developer experience can suffer and thus hurt morale and, paradoxically, productivity as well. It’s important for management to have a light touch to avoid this problem, especially with cloud native. Cloud native environments can become so dynamic and noisy that both productivity and developer experience can decline. Management must take special care to support its developers with the right platforms, tools, processes and productivity metrics to facilitate the best outcomes, leveraging platform engineering to create and manage IDPs that facilitate cloud native development despite its inherent complexity. After all, the complexity of cloud native development alone isn’t the problem. Complexity presents challenges to be sure, but developers are always up for a challenge. Complexity coupled with a lack of visibility brings frustration, lowering productivity and DX. With the right observability, for example, with Chronosphere and Google Cloud, developers have a good shot at untangling cloud native’s inherent complexity, delivering quality software on time and on budget, while maintaining both productivity and DX.


Vulnerability to Resilience: Vision for Cloud Security

In the recent era of cloud-native development and DevSecOps, CISOs face the challenge of fostering a security-conscious culture that spans across various cross-functional teams. However, by adopting deliberate, disruptive, engaging, and enjoyable approaches that also provide a return on investment, a sustainable security culture can be achieved. It is essential to instill the concept of shared responsibility for security and focus on enhancing awareness and adhering to advanced security practices. If you don't already have a secure development lifecycle, it is imperative to integrate one immediately. Recognizing and rewarding individuals who prioritize security is one of the ways to encourage a security-focused culture. Additionally, creating a security community and making security more engaging and enjoyable can also help cultivate a sustainable security culture. CISOs should leverage technical tools and best practices to facilitate the seamless integration of security into the Continuous Integration/Continuous Deployment (CI/CD) pipeline. This can be achieved through various measures, such as conducting threat modeling, adopting a shift-left security approach, incorporating IDE security ...



Quote for the day:

"You may have to fight a battle more than once to win it." -- Margaret Thatcher

Daily Tech Digest - February 15, 2024

CISO and CIO Convergence: Ready or Not, Here It Comes

While CIOs are still responsible for setting and meeting technology goals and for staying on budget, their primary mandate is determining how the company can harness technology to innovate, and then procure and manage those resources. While plenty of companies still maintain large, on-premise IT estate, it's just a matter of time before they digitally transform. Either way, the CIO role has become markedly less operational over time. On the other hand, the profile of CISOs has been growing since the early 2000s, set against a non-stop carousel of compliance mandates, data breaches, and emerging cybersecurity threats. While data breaches may have forced businesses to pay attention to security, it was compliance mandates that funded it. From HIPAA and PCI DSS to GDPR, SOC 2, and more, compliance has been a double-edged sword for CISOs. Compliance increased the role of cybersecurity teams and made them more visible across IT and the business as a whole, providing CISOs with bigger budgets and increased latitude on how to spend it. However, all the effort they put into compliance did little to stymie phishing, ransomware, big breaches, and/or malicious insiders. 


Will Generative AI Kill DevSecOps?

Beyond having automation and guardrails in place, you also need security policies at the company level, Moisset said, to make sure that DevSecOps understands all the generative AI tools colleagues are using. Then you can educate them on how to use it, like creating and communicating a generative AI policy. Because a total ban on GenAI just won’t fly. When Italy temporarily banned ChatGPT, Foxwell said there was a visible decrease in productivity across the country’s GitHub organizations, but, when it was reinstated, “what also picked up was the usage of tools that circumvented all of the government policies and firewalls around the prevention of using these” tools. Engineers always find a way. Particularly when using generative AI for customer service chatbots, Moisset said, you need guardrails in place around both the inputs and outputs, as malicious actors can potentially “socialize” the chatbot via prompt injection to give a desired result — like when someone was able to buy a Chevy for $1 from a chatbot. “It’s back to educating the users and developers that it’s good to use AI, we should be using AI, but we need to actually put guardrails around it,” she said, which also demands an understanding of how your customers interact with GenAI.


Combining heat and compute

Data centers offer a predictable supply of heat because they keep their servers running continuously. But the heat is “low-grade:” It is warm rather than hot, and it comes in the form of air, which is difficult to transport. So, most data centers vent their heat to the atmosphere. Sometimes, there are district heat networks, which provide warmth to local homes and businesses through a piped network. If your data center is near one of these, it is a matter of extending it to connect to the data center, and boosting the grade of heat. But you have to be in the right place to connect to one. “There are certain countries that have established or developing heat networks, but the majority don't have a heat network per se, so it's going on a piecemeal basis,” Neal Kalita, senior director of power and energy at NTT, tells DCD. You are unlikely to find one in the US, says Rolf Brink of cooling consultancy Promersion: “The United States is a fundamentally different ecosystem. But Europe is a lot more dense in terms of population, and there is more heat demand.” The Nordic countries have a lot of heat networks. Stockholm Data Parks is a well-known example - a data center campus in urban Stockholm, where every data center has a connection to the district heating network and gets paid for its heat.


Harmonizing human potential and AI: The evolution of work in the digital era

The evolving landscape of work is witnessing a profound transformation as the fusion of human potential with AI takes center stage. Concerns about the ethical implications of AI are well-known, including the potential for perpetuating bias and discrimination and its impact on employment and job security. Ensuring that AI is developed and deployed ethically and responsibly is crucial, taking into account fairness, transparency and accountability. ... Optimizing human-centric capabilities with automation and an AI-first mindset is significant for long-term success. Consider a telecoms operator with employees struggling to grapple with the labor-intensive process of manually reviewing a high volume of mobile tower lease contracts. By embracing an AI-powered platform equipped with capabilities for faster and more accurate extraction of contract clauses, employees were able to shift their focus toward leveraging hidden risks identified by the platform. This enabled the renegotiation of existing contracts, leading to millions of dollars in savings. It’s no coincidence that the enterprises that are more inclined to augment human potential are those resilient enough to maximize the value of AI-led transformations. 


5 Wi-Fi vulnerabilities you need to know about

Like wired networks, Wi-Fi is susceptible to Denial of Service (DoS) attacks, which can overwhelm a Wi-Fi network with excessive amount of traffic. This can cause the Wi-Fi to become slow or unavailable, disrupting normal operations of the network, or even the business. A DoS attack can be launched by generating a large number of connection or authentication requests, or injecting the network with other bogus data to break the Wi-Fi. ... Wi-jacking occurs when a Wi-Fi-connected device has been accessed or taken over by an attacker. The attacker could retrieve saved Wi-Fi passwords or network authentication credentials on the computer or device. Then they could also install malware, spyware, or other software on the device. They could also manipulate the device’s settings, including the Wi-Fi configuration, to make the device connect to rogue APs. ... RF interference can cause Wi-Fi disruptions. Instead of being caused by bad actors, RF interference could be triggered by poor network design, building changes, or other electronics emitting or leaking into the RF space. Interference can result in degraded performance, reduced throughput, and increased latency.


AI outsourcing: A strategic guide to managing third-party risks

Bias may persist in many face detection systems. Naturally, this misidentification could have severe consequences for the parties involved. Diverse training data and transparent algorithms are necessary to mitigate the risk of discriminatory outcomes. Furthermore, complex AI models often encounter the “black box” problem or how some AI models arrive at their decisions. Teaming with a third-party AI service requires human oversight to navigate the threat of biased algorithms. ... Most of us can admit that the risk of becoming overly reliant on AI is significant. AI can quickly become a go-to solution for many challenges. It’s no surprise that companies face a similar risk, becoming too dependent on a single vendor’s AI solutions. However, this approach can become problematic. Companies can “get stuck,” and switching providers seems almost impossible. ... Quality and reliability concerns are top-of-mind for most company leaders partnering with third-party AI services. Some primary concerns include service outages, performance issues, and unexpected disruptions. Operational resilience is necessary, and contingency plans are a significant piece of the resiliency puzzle, given the damage business downtime can cause. 


Practices for Implementing an Effective Data Governance Strategy

Ensuring the integrity and usability of data within an organization requires implementing clear data quality standards and metrics. These standards serve as a benchmark for data quality, guiding data management practices and ensuring that data is accurate, complete, and reliable. Organizations can streamline their data governance processes by defining what constitutes quality data, making it easier to identify and rectify data issues. This approach enhances data quality, supports compliance with regulatory requirements, and improves decision-making capabilities. Developing a comprehensive set of data quality metrics is crucial for monitoring and maintaining high data standards. These metrics should be aligned with the organization’s strategic objectives and include criteria such as accuracy, completeness, consistency, timeliness, and uniqueness. ... Creating an environment where data stewardship and accountability are at the forefront requires strategic planning and commitment from all levels of an organization. It is essential to embed data governance principles into the corporate culture, ensuring that every team member understands their role in maintaining data integrity and security.


What is the impact of AI on storage and compliance?

Right now, when you look at traditional storage, generally speaking you look at your environment, your ecosystem, your data, classifying that data, and putting a value on it. And, depending on that value and the potential impact, you put in the right security and assign the length of time you need to keep the data and how you keep it, delete it. But, if you look at a CRM [customer relationship management service], if you put the wrong data in then the wrong data comes out, and it’s one set of data. So, to be blunt, garbage in, garbage out. With AI, it’s much more complex than that, so you may have garbage in, but instead of one dataset out that might be garbage, there might be a lot of different datasets and they may or may not be accurate. If you look at ChatGPT, it’s a little bit like a narcissist. It’s never wrong and if you give it some information and then it spits out the wrong information and then you say, “No, that’s not accurate”, it will tell you that’s because you didn’t give it the right dataset. And then at some stage it will stop talking to you, because it will have used up all its capability to argue with you, so to speak. From a compliance perspective, if you are using AI – a complicated AI or a simple AI like ChatGPT – to create a marketing document, that’s OK.


How to Get Your Failing Data Governance Initiatives Back on Track

Data governance is a big lift. Organizations might make the mistake of attempting to roll the initiative out across the entire enterprise without building in the steps to get there. “If you make it too broad and end up not focusing on short-term goals that you can demonstrate to keep the funding going, these engagements [tend] to fail,” says Prasad. Organizational issues are some of the major stumbling blocks standing in the way of successful data governance, but there can also be technical obstacles. Reiter points to the importance of leveraging automation. If an enterprise team attempts to manually undertake data governance mapping, it could be irrelevant by the time it is completed. ... Documentation, or lack thereof, can be a good indicator of a data governance initiatives' progress and sustainability. “As things are changing over time and documentation isn’t updated, that's a great sign that governance is not maintainable,” Holiat says. Getting feedback from end users can alert data governance leaders to issues standing in the way of adoption. Are people throughout the organization frustrated with the data governance program? Does it facilitate their access to data, or is it making their jobs more difficult?


Adopting AI with Eyes Wide Open

For businesses in general, AI can increase efficiency, make the workplace safer, improve customer service, create competitive advantage and lead to new business models and revenue streams. But like any technological innovation, AI has its risks and challenges. At the heart of AI is code and data; code can (and often does) contain bugs, and data can (and often does) contain anomalies. But that is no different to the technological innovations that we have embraced to-date. Arguably, the risks and challenges of AI are greater – not least of all because of the potential breadth of its application – and they include (but are certainly not limited to): overreliance, lack of transparency, ethical concerns, security, and regulatory and statutory challenges which typically lag behind the pace of progress. So, what does have this to do with strategy and architecture, and in particular digital transformation? Too often in organizations, new technologies are rushed in, in the belief that there is no time to lose. Before you know it, the funds and resources have been found to embark on an initiative (programme or project) to adopt it, spearheading the way to the future. It is the future! 



Quote for the day:

"I find that the harder I work, the more luck I seem to have." -- Thomas Jefferson

Daily Tech Digest - February 14, 2024

How AI is strengthening XDR to consolidate tech stacks

XDR platforms need AI/ML technologies to identify malware-free breach attempts while also looking for signals of attackers relying on legitimate system tools and living-off-the-land (LOTL) techniques to breach endpoints undetected. ... VentureBeat spoke with several CEOs at RSAC 2023 to learn how each perceives the value of AI in their product strategies today and in the future. Connie Stack, CEO of NextDLP, told VentureBeat, “AI and machine learning can significantly enhance data loss prevention by adding intelligence and automation to detecting and preventing data loss. AI and machine learning algorithms can analyze patterns in data and detect anomalies that may indicate a security breach or unauthorized access to sensitive information well before any policy violation occurs.” XDR providers tell VentureBeat that the challenge of parsing an exponential increase in telemetry data, performing telemetry enrichment and mapping data to schema are the immediate architectural requirements they have. There’s also the need for real-time cross-collaboration, analytics and alert prioritization. XDR’s current and future ecosystem is dependent on AI’s continued growth.


10 ways generative AI will transform software development

The ability to prompt for code adds risks if the code generated has security issues, defects, or introduces performance issues. The hope is that if coding is easier and faster, developers will have more time, responsibility, and better tools for validating the code before it gets embedded in applications. But will that happen? “As developers adopt AI for productivity benefits, there’s a required responsibility to gut-check what it produces,” says Peter McKee, head of developer relations at Sonar. “Clean as you code ensures that by performing checks and continuous monitoring during the delivery process, developers can spend more time on new tasks rather than remediating bugs in human-created or AI-generated code.” CIOs and CISOs will expect developers to perform more code validation, especially if AI-generated code introduces significant vulnerabilities. ... Another implication of code developed with genAI concerns how enterprise leaders develop policies and monitor the supply chain of what code is embedded in enterprise applications. Until now, organizations were most concerned about tracking open source and commercial software components, but genAI adds new dimensions.


Agile Methodologies In The Era Of Machine Learning Development

Both emphasize adaptability and continuous improvement, providing a solid foundation for building robust ML models. The iterative cycles of Agile resonate with the constant refinement required in ML algorithms, fostering an environment conducive to experimentation and learning. Bringing together Agile and Machine Learning (ML) is like mixing the best of teamwork and smart strategies for computer programs. Agile is like a way of working that’s flexible and can adapt quickly, and ML is all about smart machines learning from data. When they come together, it’s like using a super-smart and flexible approach to make really cool and smart computer programs ... Everyone has a special skill, like some friends are good at building, and others are good at deciding what the robot dog should do.This teamwork also helps if you discover something new, like a better way for the robot dog to move. Agile allows you to quickly change and improve, just like trying a new game. ... Unlike traditional software, ML projects grapple with inherent uncertainties in data and model outcomes, requiring a more adaptive approach. Navigating these uncertainties is paramount when incorporating Agile principles.


The AI data-poisoning cat-and-mouse game — this time, IT will win

The offensive technique works in one of two ways. One, it tries to target a specific company by making educated guesses about the kind of sites and material they would want to train their LLMs with. The attackers then target, not that specific company, but the many places where it is likely to go for training. If the target is, let’s say Nike or Adidas, the attackers might try and poison the databases at various university sports departments with high-profile sports teams. If the target were Citi or Chase, the bad guys might target databases at key Federal Reserve sites. The problem is that both ends of that attack plan could easily be thwarted. The university sites might detect and block the manipulation efforts. To make the attack work, the inserted data would likely have to include malware executables, which are relatively easy to detect. Even if the bad actors’ goal was to simply feed incorrect data into the target systems — which would, in theory, make their analysis flawed — most LLM training absorbs such a massively large number of datasets that the attack is unlikely to work well.


What Is API Sprawl and Why Is It Important?

Inconsistencies between APIs can stunt the developer experience around integration. For example, many different design paradigms are used in modern API development, including SOAP, REST, gRPC and more asynchronous formats like webhooks or Kafka streams. An organization might adopt various styles simultaneously. Using various API styles provides best-of-breed options for the task at hand. That said, style inconsistencies can make it challenging for a single developer to navigate disparate components without guidance. ... As cybersecurity experts often say, you can’t secure what you don’t know. Amid technology sprawl, you likely won’t be aware of the hundreds, if not thousands, of APIs being developed and consumed daily. Without inventory management, APIs can slip under the rug and rot. API sprawl can also lead to insecure coding practices. Security researchers at Escape recently found 18,000 high-risk API-related secrets and tokens after performing a scan of the web. ... Life cycle management can also suffer with sprawl. If API versioning and retirement schedules aren’t communicated effectively, it can easily lead to breaking changes on the client side. 


Rise in cyberwarfare tactics fueled by geopolitical tensions

There are a number of ways in which public-private partnerships can be effective in addressing cybersecurity threats. First, governments and private companies can share information about cyber threats and vulnerabilities. This can help to improve the overall security posture of both the public and private sectors. Second, governments and private companies can develop joint cybersecurity initiatives. These initiatives can focus on a variety of areas, such as developing new security technologies, improving incident response capabilities, or providing cybersecurity training to employees. Third, governments and private companies can collaborate on research and development efforts. This can help to identify new cybersecurity threats and develop new ways to protect against them. Caveat, when talking about public-private partnerships – what is needed is real operational and ongoing public-private collaboration is essential for sharing information, developing best practices, and mitigating risks and is essential for building a more secure and resilient cyber ecosystem. 


New media could bring fresh competition to tape archive market

Glass is becoming another alternative to tape. Microsoft's Project Silica uses femtosecond lasers to write data to quartz glass and "polarization-sensitive microscopy using regular light to read," according to Microsoft. Another company, Cerabyte, uses lasers to etch patterns into ceramic nanocoatings on glass. Ceramic is resistant to heat, moisture, corrosion, UV light, radiation and electromagnetic pulse blasts. Ceramic also has another advantage over tape: Its high durability leads to fewer refresh cycles, according to Martin Kunze, chief marketing officer and co-founder of Cerabyte, a startup headquartered in Munich. "Tape has limited durability and needs to be either refreshed or all migrated onto new formats," Kunze said. This undertaking is expensive and time-consuming, he said. Kunze added that tape is vulnerable to vertical market failure. Western Digital is the only company manufacturing the reading and writing heads for tape. "Assume there is a decision on the board: 'We don't [want to] run this company anymore because it doesn't bring in as much revenue,'" he said. The single point of failure could leave enterprises in the lurch. He sees another problem with tape -- it's stodgy.


Apache Pekko: Simplifying Concurrent Development With the Actor Model

In the actor model, actors communicate by sending messages to each other, without transferring the thread of execution. This non-blocking communication enables actors to accomplish more in the same amount of time compared to traditional method calls. Actors behave similarly to objects in that they react to messages and return execution when they finish processing the current message. Upon reception of a message an Actor can do the following three fundamental actions: send a finite number of messages to Actors it knows; create a finite number of new Actors; and designate the behavior to be applied to the next message. ... Pekko is designed as a modular application and encompasses different modules to provide extensibility. The main components are: Pekko Persistence enables actors to persist events for recovery on failure or during migration within a cluster that provides abstractions for developing event-sourced applications; Pekko Streams module provides a solution for stream processing, incorporating back-pressure handling seamlessly and ensuring interoperability with other Reactive Streams implementations...


How Can Synthetic Data Impact Data Privacy in the New World of AI

Data from the real world is often inherently biased. This is because the data used to train models is largely gathered from across the internet, reflecting biases present in society and the socio-economic groups prevalent in the social media spaces used to gather this data. Data scientists have turned to synthetic data and ‘Digital Humans’ to combat these biases. With Digital Humans, data scientists can vary elements of ‘Digital DNA,’ such as et,’ city, size, and clothing, and mix with real-world data to create more representative and diverse datasets. Of course, this also protects image rights and PII exposure that could come from using images and footage of people in the real world. Mindtech worked with a construction company that wanted to develop autonomous site vehicles. The company wanted to enhance these vehicles’ safety and accrue a broader range of data to train them. As a result, it used synthetic data to create diverse synthetic datasets to train these vehicles to identify various people on site, no matter size/shape/sex/ethnicity/clothing/ – the vehicles could stop their journey if someone were blocking their way.


The Great Superapp Dilemma: Business Ambitions vs User Privacy

If we put privacy aside for a moment, the benefits of a possible superapp cannot be denied. We could say goodbye to the hundreds of online accounts that operate as an isolated silo managed by unrelated services and domains and the chore of updating account details across them all, one by one. And, as well as promising a much simpler user experience through a single application, it would unlock new convenient services using a broader set of data, and allow for increased innovation that adds value for users – such as unified health metrics, consolidated banking services, cohesive government-related accounts, integrated social networks, or unified marketplaces. However, managing vast volumes of accessible data – which has grown excessively since the era of big data, and will no doubt continue with the advent of AI – is operationally challenging to say the least. ... With these concerns in mind, companies working on superapp development must address issues including managing and recovering from identity theft, securing data against breaches, and ensuring that data access aligns with the user’s consented sharing policy.



Quote for the day:

''Effective questioning brings insight, which fuels curiosity, which cultivates wisdom.'' -- Chip Bell

Daily Tech Digest - February 13, 2024

Advanced Microsegmentation Strategies for IT Leaders

Microsegmentation, and network segmentation in general, is a 50-year-old cybersecurity strategy that “involves dividing a network into smaller zones to enhance security by restricting the movement of a threat to an isolated segment rather than to the whole network,” says Guy Pearce, a member of the ISACA Emerging Trends Working Group. ... Moyle says that any segmentation (micro or otherwise) can be “part of a security strategy based on use case, architecture and other factors.” He notes that microsegmentation itself isn’t an end goal for security, and that IT leaders should instead see it as “a mechanism that’s part of a broader holistic strategy.” That said, many factors go into a successful microsegmentation implementation, namely careful planning. Microsegmentation goes hand in hand with setting up granular security policies. It also relies on continuous monitoring, evaluation and user education awareness, Pearce says. Successful microsegmentation also requires automation, incident response orchestration and cross-team collaboration. None of that is sustainable without a solid, well-maintained network architecture map. 


Could DC win the new data center War of the Currents?

Fundamentally, electronics use DC power. The chips and circuit boards are all powered by direct current, and every computer or other piece of IT equipment that is plugged into the AC mains has to have a “power supply unit” (PSU), also known as a rectifier or switched mode power supply (SMPS) inside the box, turning the power from AC to DC. ... Data centers have an Uninterruptible Power Supply (UPS) designed to power the facility for long enough for generators to fire up. The UPS has to have a large store of batteries, and they are powered by DC. So power enters the data center as AC, is converted to DC to charge the batteries, and then back to AC for distribution to the racks. ... Data centers are now looking at using microgrids for power. That means drawing on-site energy directly from sources such as fuel cells and solar panels. As it turns out, those sources often conveniently produce direct current. A data center could be isolated from the AC grid, and live on its own microgrid. On that grid DC power sources charge batteries, and power electronics which fundamentally run on DC. In that situation, the idea of switching to AC for a short loop around the facility begins to look, well, odd.


5 key metrics for IT success

When merged, speed, quality, and value metrics are essential for any organization undergoing transformation and looking to move away from traditional project management approaches, says Sheldon Monteiro, chief product officer at digital consulting firm Publicis Sapient. “This metric isn’t limited to a specific role or level within an IT organization,” he explains. “It’s relevant for everyone involved in the product development process.” Speed, quality, and value metrics represent a shift from traditional project management metrics focused on time, scope, and cost. “Speed ensures the ability to respond swiftly to change, quality guarantees that changes are made without compromising the integrity of systems, and value ensures that the changes contribute meaningfully to both customers and the business,” Monteiro says. “This holistic approach aligns IT practices with the demands of a continuously evolving landscape.” Focusing on speed, quality, and value provides a more nuanced understanding of an organization’s adaptability and effectiveness. “Focusing on speed, quality, and value provides insights into an organization’s ability to adapt to continuous change,” Monteiro says. 


The future of cybersecurity: Anticipating changes with data analytics and automation

In recent years, cybersecurity threats have undergone a notable evolution, marked by the subtler tactics of mature threat actors who now leave fewer artifacts for analysis. The old metaphor ‘looking for a needle in a haystack’ (to describe the detection of malicious activity) is now more akin to ‘looking for a needle in a stack of needles.’ This shift necessitates the establishment of additional context around suspicious events to effectively differentiate legitimate from illegitimate activities. Automation emerges as a pivotal element in providing this contextual enrichment, ensuring that analysts can discern relevant circumstances amid the rapid and expansive landscape of modern enterprises. The landscape of cyber threats continues to further evolve, and recent high-profile data breaches underscore the gravity of the shift. In response to these challenges, data analytics and automation play a crucial role in detecting lateral movement, privilege escalation, and exfiltration, particularly when threat actors exploit zero-day vulnerabilities to gain entry into an environment.


Significance of protecting enterprise data

In a world where data fuels innovation and growth, protecting enterprise data is not optional; it’s essential. The digital age has ushered in a complex threat landscape, necessitating a multifaceted approach to data protection. From next-gen SOCs and application security to IAM, data privacy, and collaboration with SaaS providers, every aspect plays a vital role. As traditional security tools and firewalls are no longer sufficient to detect and respond to modern threats, next-generation security operations centres (SOCs) can play a proactive role by leveraging technologies like AI, machine learning, and user behavior analytics. They can analyse huge volumes of data in real-time to detect even the most well-hidden attacks. Early detection and quick response are crucial to minimise damage from security incidents. Next-gen SOCs play a pivotal role in safeguarding enterprises by enhancing visibility, shortening response times, and reducing security risks. Protecting applications is equally important, as in the digital age, applications are the conduit through which data flows. Many successful breaches target exploitable vulnerabilities residing in the application layer, indicating the need for enterprise IT departments to be extra vigilant about application security. 


A changing world requires CISOs to rethink cyber preparedness

A cybersecurity posture that is societally conscious equally requires adopting certain underlying assumptions and taking preparatory actions. Foremost among these is the recognition that neutrality and complacency are anathema to one another in the context of digital threats stemming from geopolitical tension. As I recently wrote, the inherent complexity and significance of norm politicking in international affairs leads to risk that impacts cybersecurity stakeholders in nonlinear fashion. Recent conflicts support the idea that civilian hacking around major geopolitical fault lines, for instance, operates on divergent logics of operations depending on the phase of conflict that is underway. The result of such conditions should not be a reluctance to make statements or take actions that avoid geopolitical relevance. Rather, cybersecurity stakeholders should clearly and actively attempt to delineate the way geopolitical threats and developments reflect the security objectives of the organization and its constituent community. They should do so in a way that is visible to that community. 


AI-powered 6G wireless promises big changes

According to Will Townsend, an analyst at Moor Insights & Strategy, things are accelerating more quickly with 6G than 5G did at the same point in its evolution. And speaking of speeds, that will also be one of the biggest and most transformative improvements of 6G over 5G, due to the shift of 6G into the terahertz spectrum range, Townsend says. “This will present challenges because it’s such a high spectrum,” he says. “But you can do some pretty incredible things with instantaneous connectivity. With terahertz, you’re going to get near-instantaneous latency, no lag, no jitter. You’re going to be able to do some sensory-type applications.” ... The new 6G spectrum also brings another benefit – an ability to better sense the environment, says Spirent’s Douglas. “The radio signal can be used as a sensing mechanism, like how sonar is used in submarines,” he says. That can allow use cases that need three-dimensional visibility and complete visualization of the surrounding environment. “You could map out the environment – the shops, buildings, everything – and create a holistic understanding of the surroundings and use that to build new types of services for the market,” Douglas says. 


What distinguishes data governance from information governance?

Data governance is primarily concerned with the proper management of data as a strategic asset within an organization. It emphasizes the accuracy, accessibility, security, and consistency of data to ensure that it can be effectively used for decision-making and operations. On the other hand, information governance encompasses a broader spectrum, dealing with all forms of information, not just data. It includes the management of data privacy, security, and compliance, as well as the handling of business processes related to both digital and physical information. ... Implementing data governance ensures that an organization's data is accurate, accessible, and secure, which is vital for operational decision-making and strategic planning. This governance type establishes the necessary protocols and standards for data quality and usage. Information governance, by managing all forms of information, helps organizations comply with legal and regulatory requirements, reduce risks, and enhance business efficiency and effectiveness. It also addresses the management of redundant, outdated, and trivial information, which can lead to cost savings and improved organizational performance.


The Future Is AI, but AI Has a Software Delivery Problem

As more developers become comfortable building AI-powered software, Act Three will trigger a new race: the ability to build, deploy and manage AI-powered software at scale, which requires continuous monitoring and validation at unprecedented levels. This is why crucial DevOps practices for delivering software at scale, like continuous integration and continuous delivery (CI/CD), will play a central role in providing a robust framework for engineering leaders to navigate the complexities of delivering AI-powered software — therefore turning these technological challenges into opportunities for innovation and competitive advantage. Just as software teams have honed practices for getting reliable, observable, available applications safely and quickly into customers’ hands at scale, AI-powered software is yet again evolving these methods. We’re experiencing a paradigm shift from the deterministic outcomes we’ve built software development practices around to a world with probabilistic outcomes. This complexity throws a wrench in the conventional yes-or-no logic that has been foundational to how we’ve tested software, requiring developers to navigate a variety of subjective outcomes.


Generative AI – Examining the Risks and Mitigations

In working with AI, we should be helping executives in the companies we are working with to understand these risks and also the potential applications and innovations that can come from Generative AI. That is why it is essential that we take a moment now to develop a strategy for dealing with Generative AI. By developing a strategy, you will be well positioned to reap the benefits from the capabilities, and will be giving your organization a head-start in managing the risks. When looking at the risks, companies can feel overwhelmed or decide that it represents more trouble than they are willing to accept and may take the stance of banning GenAI. Banning GenAI is not the answer, and will only lead to a bypassing of controls and more shadow IT. So, in the end, they will use the technology but won’t tell you. ... AI risks can be broadly categorized into three types: Technical, Ethical, and Social. Technical risks refer to the potential failures or errors of AI systems, such as bugs, hacking, or adversarial attacks. Ethical risks refer to the moral dilemmas or conflicts that arise from the use or misuse of AI, such as bias, discrimination, or privacy violations. Social risks refer to the impacts of AI on human society and culture, such as unemployment, inequality, or social unrest.



Quote for the day:

"In the end, it is important to remember that we cannot become what we need to be by remaining what we are." -- Max De Pree