Daily Tech Digest - August 01, 2023

Is generative AI mightier than the law?

The FTC hasn’t been shy in going after Big Tech. And in the middle of July, it took its most important step yet: It opened an investigation into whether Microsoft-backed OpenAI has violated consumer protection laws and harmed consumers by illegally collecting data, violating consumer privacy and publishing false information about people. In a 20-page letter sent by the FTC to OpenAI, the agency said it’s probing whether the company “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers.” The letter made clear how seriously the FTC takes the investigation. It wants vast amounts of information, including technical details about how ChatGPT gathers data, how the data is used and stored, the use of APIs and plugins, and information about how OpenAI trains, builds, and monitors the Large Language Models (LLMs) that fuel its chatbot. None of this should be a surprise to Microsoft or OpenAI. In May, FTC Chair Lina Khan wrote an opinion piece in The New York Times laying out how she believed AI must be regulated.


Four Pillars of Digital Transformation

The first principle was understanding your customers and your customer segments. That's number one. Second is aligning with your customers and your functional teams, because you cannot do digital transformation in a silo. You can say, "Oh, Asif and Shane want to digitally transform this company and forget about what people A and people B are thinking. Shane and I are going to go and make that happen." We will fail. Not going to happen. That's where the cross-functional team alignment comes in. The third is influencing and understanding what the change is, why we want to do it, how we are going to do it. And what's in it, not for Shane, not for Asif. What's in it for you as a customer or as an organization? Again, showing the empathy and explaining the why behind it. And finally, communicating, communicating, communicating, over-communicating and celebrating success. To me, those are the big four pillars that we use to sell the idea of what digital transformation is at any level, from the C-level all the way to people at the store level. Can I explain to them what's in it for them?


Scientists Seek Government Database to Track Harm from Rising 'AI Incidents'

Faced with mounting evidence of such harmful AI incidents, the FAS noted the government database could somewhat align with other trackers and efforts. "The database should be designed to encourage voluntary reporting from AI developers, operators, and users while ensuring the confidentiality of sensitive information," the FAS said. "Furthermore, the database should include a mechanism for sharing anonymized or aggregated data with AI developers, researchers, and policymakers to help them better understand and mitigate AI-related risks. The DHS could build on the efforts of other privately collected databases of AI incidents, including the AI Incident Database created by the Partnership on AI and the Center for Security and Emerging Technologies. This database could also take inspiration from other incident databases maintained by federal agencies, including the National Transportation Safety Board's database on aviation accidents." The group further recommended that the DHS should collaborate with NIST to design and maintain the database, including setting up protocols for data validation, categorization, anonymization, and dissemination.


Generative AI: Headwind or tailwind?

It's topping everyone's wish list, with influences converging from various directions: customers, staff members and corporate boards, all applying pressure to harness its potential in their respective markets. On the bright side, there's a unified objective: to make progress. The challenge, however, is that, like most early-stage technologies, the path forward with generative AI isn't straightforward — there's a lot of ambiguity about what to do, how to do it or even where to start. The potential of generative AI surpasses mere cost-effectiveness and efficiency. It can fuel the generation of new ideas, fine-tune designs and facilitate the launch of new products. It could serve as your catalyst for innovation if you're bold enough to step into this new frontier. But where do you step first? Our approach is first to identify a problem or "missing." In the simplest explanation possible, envision a Venn diagram where one circle represents the new tech wave (generative AI) and the other represents your customer, their challenges, opportunities, tasks, pains and gains.


Hackers: We won’t let artificial intelligence get the better of us

Hackers who have adopted or who plan to adopt generative AI are most inclined to use OpenAI’s ChatGPT ... Those that have taken the plunge are using generative AI technology in a wide variety of ways, with the most commonly used functions being text summarisation or generation, code generation, search enhancement, chatbots, image generation, data design, collection or summarisation, and machine learning. Within security research workflows specifically, hackers said they found generative AI most useful to automate tasks, analyse data, and identify and validate vulnerabilities. Less widely used applications included conducting reconnaissance, categorising threats, detecting anomalies, prioritising risk and building training models. Many hackers who are not native English speakers or not fluent in English are also using services such as ChatGPT to translate or write reports and bug submissions, and fuel more collaboration across national borders.


Your CDO Does More Than Just Protect Data

Influential CDOs who can collaborate without being perceived as aloof or arrogant stand out in the field. Balancing visionary thinking with practical implementation strategies is vital, and CDOs who instill purpose and forward-looking excitement within their teams create a culture of innovation and continuous improvement. These qualities are essential for unlocking the full potential of data leadership. ... With boards needing more depth of tech knowledge to oversee strategy, CDOs can be valuable directors. CDOs who can demonstrate experience in making data core to the company’s strategy or informing a transformational pivot for the business would bring considerable value to boardroom discussions. The opportunity to understand and see risks and opportunities through a board member’s eyes is an invaluable experience for a CDO, which not only helps the CDO to prepare for future board service but also gives your board members additional education about the future of data and what it can bring to your organization.


Data Warehouse Telemetry: Measuring the Health of Your Systems

At the heart of the data warehouse system, there is a pulse. This is the set of measures that indicate the system's performance -- its heartbeat, so to speak. This includes the measurements of system resources, such as disk reads and writes, CPU and memory utilization, and disk usage. These metrics are an indicator of how well the overall system is performing. It is important to monitor these metrics to make sure they do not go too low or too high. When they go too low, it is an indicator that the system has been oversized and resources are being wasted. When they go too high, it is an indicator that the system is undersized and resources are nearing exhaustion. As the resources hit a critical level, overall performance can grind to a halt, freezing processes and negatively impacting the user experience. When a medical practitioner sees that a patient’s heart rate/pulse is too fast or too slow, they will provide several recommendations, including ongoing monitoring to see if the situation improves or changes to diet or exercise.
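As a concrete illustration, here is a minimal sketch of such a pulse check in Python using the psutil library; the thresholds are invented for illustration and would need tuning to your warehouse's actual sizing.

```python
import psutil

# Illustrative healthy band; real limits depend on your warehouse sizing.
LOW, HIGH = 20.0, 85.0  # percent utilization

def check_pulse() -> list[str]:
    """Sample core system metrics and flag any outside the healthy band."""
    metrics = {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }
    warnings = []
    for name, value in metrics.items():
        if value > HIGH:
            warnings.append(f"{name}={value:.1f}%: possibly undersized, resources nearing exhaustion")
        elif value < LOW:
            warnings.append(f"{name}={value:.1f}%: possibly oversized, resources being wasted")
    return warnings

if __name__ == "__main__":
    for line in check_pulse() or ["all metrics within the healthy band"]:
        print(line)
```

In practice such a check would run on a schedule and feed a dashboard or alerting system, mirroring the ongoing monitoring a medical practitioner would recommend.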


Reducing Generative AI Hallucinations and Trusting Your Data

Data has two dimensions. One is the actual value of the data and the parameter that it represents; for example, the temperature of an asset in a factory. Then, there is also the relational aspect of the data that shows how the source of that temperature sensor is connected to the rest of the other data generators. This value-oriented aspect of data and the relational aspect of that data are both important for quality, trustworthiness, and for the history, revision, and versioning of the data. There’s obviously the communication pipeline, and you need to make sure that the point where the data sources connect to your data platform is sufficiently reliable and secure. Make sure the data travels with integrity and the data is protected against malicious intent. ... Generative AI is one of those foundational technologies, much as software changed the world. Marc [Andreessen, a partner in the Silicon Valley venture capital firm Andreessen Horowitz] in 2011 said that software is eating the world, and software already ate the world. It took 40 years for software to do this.
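To make the two dimensions concrete, here is a minimal sketch (all names hypothetical) in which a reading carries its value, unit and revision trail, while a separate structure records how the sensor relates to other data generators.

```python
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    """The value dimension: what was measured, and its revision history."""
    parameter: str          # e.g., "temperature"
    value: float
    unit: str
    revision: int = 1       # history/versioning travels with the data

@dataclass
class SensorNode:
    """The relational dimension: how this source connects to other generators."""
    sensor_id: str
    reading: SensorReading
    connected_to: list[str] = field(default_factory=list)  # ids of related assets

# A factory temperature sensor, linked to the machine and line it belongs to.
node = SensorNode(
    sensor_id="temp-sensor-7",
    reading=SensorReading(parameter="temperature", value=71.4, unit="C"),
    connected_to=["press-machine-3", "assembly-line-B"],
)
print(node)
```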


10 Reasons for Optimism in Cybersecurity

The new National Cybersecurity Strategy announced by the Biden Administration this year emphasizes the importance of public-private collaboration. Google Cloud’s Venables anticipates that knowledge sharing between the public and private sectors will help enhance transparency around cyber threats and improve protection. “As public and private sector collaboration grows, in the next few years we’ll see deeper coordination between agencies and big tech organizations in how they implement cyber protections,” he says. The public and private sectors also have the opportunity to join forces on cybersecurity regulation. ... As the cybersecurity product market matures, it will not only embrace secure-by-design and -default principles. XYPRO’s Tcherchian is also optimistic about the consolidation of cybersecurity solutions. “Cybersecurity consolidation integrates multiple cybersecurity tools and solutions into a unified platform, addressing the crowded and complex nature of the cybersecurity market,” he explains.


Keeping the cloud secure with a mindset shift

Organizations developing software through cloud-based tools and environments must take additional care to adapt their processes. Adopting a “shift-left” approach to the continuous integration and continuous deployment (CI/CD) pipeline is particularly important. Traditionally, security checks were often performed towards the end of the development cycle. However, this reactive approach can allow vulnerabilities to slip through the cracks and reach production stages. The shift-left approach advocates for integrating security measures earlier in the development cycle. By doing so, potential security risks can be identified and mitigated early, preventing malware infiltration and reducing the cost and complexity of addressing security issues at later stages. This proactive approach aligns with the dynamic nature of cloud environments, ensuring robust security without hindering agility and innovation. Businesses should consider how they can mirror the shift-left ethos across their other cloud operations.
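As a sketch of what shifting left can look like in practice, the script below acts as an early pipeline gate, assuming the open-source scanners bandit and pip-audit are installed; it fails fast on the first finding rather than deferring security review to the end of the cycle.

```python
import subprocess
import sys

# Hypothetical early-pipeline security gate: run each scanner before the
# build/test stages, and fail the pipeline on the first finding.
CHECKS = [
    ["bandit", "-r", "src/", "-ll"],  # static analysis of Python source
    ["pip-audit"],                    # known-vulnerability scan of dependencies
]

def main() -> int:
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"security gate failed: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Placed as the first CI stage, a gate like this surfaces vulnerabilities before code is merged, which is the whole point of the shift-left ethos.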



Quote for the day:

"Leadership offers an opportunity to make a difference in someone's life, no matter what the project." -- Bill Owens

Daily Tech Digest - July 31, 2023

The open source licensing war is over

Too many open source warriors think that the license is the end, rather than just a means to grant largely unfettered access to the code. They continue to fret about licensing when developers mostly care about use, just as they always have. Keep in mind that more than anything else, open source expands access to quality software without involving the purchasing or (usually) legal teams. This is very similar to what cloud did for hardware. The point was never the license. It was always about access. Back when I worked at AWS, we surveyed developers to ask what they most valued in open source leadership. You might think that contributing code to well-known open source projects would rank first, but it didn’t. Not even second or third. Instead, the No. 1 criterion developers used to judge a cloud provider’s open source leadership was that it “makes it easy to deploy my preferred open source software in the cloud.” ... One of the things we did well at AWS was to work with product teams to help them discover their self-interest in contributing to the projects upon which they were building cloud services, such as ElastiCache.


Navigate Serverless Databases: A Guide to the Right Solution

One of the core features of Serverless is the pay-as-you-go pricing. Almost all Serverless databases attempt to address a common challenge: how to provision resources economically and efficiently under uncertain workloads. Prioritizing lower costs may mean consuming fewer resources. However, in the event of unexpected spikes in business demand, you may have to compromise user experience and system stability. On the other hand, more generous and secure resource provisioning leads to resource waste and higher costs. Striking a balance between these two styles requires complex and meticulous engineering management. This would divert your focus from the core business. Furthermore, the pay-as-you-go billing model has varying implementations in different Serverless products. Most Serverless products offer granular billing based on storage capacity and read/write operations per unit. This is largely possible due to the distributed architecture that allows finer resource scaling.
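A toy cost model makes the billing granularity visible; the unit prices below are invented for illustration, since real Serverless products price storage and read/write units very differently.

```python
# Toy pay-as-you-go estimate. All unit prices are hypothetical and vary
# widely between real Serverless database products.
PRICE_PER_MILLION_READS = 0.25    # USD, hypothetical
PRICE_PER_MILLION_WRITES = 1.25   # USD, hypothetical
PRICE_PER_GB_MONTH = 0.20         # USD, hypothetical

def monthly_cost(reads: int, writes: int, storage_gb: float) -> float:
    """Bill only for consumed read/write units and stored gigabytes."""
    return (reads / 1e6 * PRICE_PER_MILLION_READS
            + writes / 1e6 * PRICE_PER_MILLION_WRITES
            + storage_gb * PRICE_PER_GB_MONTH)

# A spiky workload pays for what it uses, not for provisioned peak capacity.
print(f"${monthly_cost(reads=40_000_000, writes=5_000_000, storage_gb=100):.2f}")
# -> $36.25 under these invented prices
```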


Building a Beautiful Data Lakehouse

It’s common to compensate for the respective shortcomings of existing repositories by running multiple systems, for example, a data lake, several data warehouses, and other purpose-built systems. However, this process frequently creates a few headaches. Most notably, data stored in one repository type is often excluded from analytics run on another, which is suboptimal in terms of the results. In addition, having multiple systems requires the creation of expensive and operationally burdensome processes to move data from lake to warehouse if required. To overcome the data lake’s quality issues, for example, many often use extract/transform/load (ETL) processes to copy a small subset of data from lake to warehouse for important decision support and BI applications. First, this dual-system architecture requires continuous engineering to ETL data between the two platforms, and each ETL step risks introducing failures or bugs that reduce data quality. Second, leading ML systems, such as TensorFlow, PyTorch, and XGBoost, don’t work well on data warehouses.


How the best CISOs leverage people and technology to become superstars

Exemplary CISOs are also able to address other key pain points that traditionally flummox good cybersecurity programs, such as the relationships between developers and application security (AppSec) teams, or how cybersecurity is viewed by other C-suite executives and the board of directors. For AppSec relations, good CISOs realize that developer enablement helps to shift security farther to the so-called left and closer to a piece of software’s origins. Fixing flaws before applications are dropped into production environments is important, and much better than the old way of building code first and running it past the AppSec team at the last minute to avoid those annoying hotfixes and delays to delivery. But it can’t solve all of AppSec’s problems alone. Some vulnerabilities may not show up until applications get into production, so relying on shifting left in isolation to catch all vulnerabilities is impractical and costly. There also needs to be continuous testing and monitoring in the production environment, and yes, sometimes apps will need to be sent back to developers even after they have been deployed. 


TSA Updates Pipeline Cybersecurity Directive to Include Regular Testing

The revised directive, developed with input from industry stakeholders and federal partners including the Cybersecurity and Infrastructure Security Agency (CISA) and the Department of Transportation, will “continue the effort to reinforce cybersecurity preparedness and resilience for the nation’s critical pipelines”, the TSA said. The reissued security directive for critical pipeline companies follows the initial directive announced in July 2021 and renewed in July 2022. The TSA said that the requirements issued in the previous years remain in place. According to the 2022 security directive update, pipeline owners and operators are required to establish and execute a TSA-approved cybersecurity implementation plan with specific cybersecurity measures, and develop and maintain a cybersecurity incident response plan (CIRP) that includes measures to be taken during cybersecurity incidents.


What is the cost of a data breach?

"One particular cost that continues to have a major impact on victim organizations is theft/loss of intellectual property," Glenn J. Nick, associate director at Guidehouse, tells CSO. "The media tend to focus on customer data during a breach, but losing intellectual property can devastate a company's growth," he says. "Stolen patents, engineering designs, trade secrets, copyrights, investment plans, and other proprietary and confidential information can lead to loss of competitive advantage, loss of revenue, and lasting and potentially irreparable economic damage to the company." It's important to note that how a company responds to and communicates a breach can have a large bearing on the reputational impact, along with the financial fallout that follows, Mellen says. "Understanding how to maintain trust with your consumers and customers is really, really critical here," she adds. "There are ways to do this, especially around building transparency and using empathy, which can make a huge difference in how your customers perceive you after a breach. If you try to sweep it under the rug or hide it, then that will truly affect their trust in you far more than the breach alone."


Meeting Demands for Improved Software Reliability

“Developers need to fix bugs, address performance regressions, build features, and get deep insights about particular service or feature level interactions in production,” he says. That means they need access to necessary data in views, graphs, and reports that make a difference to their workflows. “However, this data must be integrated and aligned with IT operators to ensure teams are working across the same data sets,” he says. Sigelman says IT operations is a crucial part of an organization’s overall reliability and quality posture. “By working with developers to connect cloud-native systems such as Kubernetes with traditional IT applications and systems of record, the entire organization can benefit from a centralized data and workflow management pane,” he says. From this point, event and change management can be combined with observability instruments, such as service level objectives, to provide not only a single view across the entire IT estate, but to demonstrate the value of reliability to the entire organization.
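As one illustration of how a service level objective turns shared telemetry into a demonstration of value, here is a minimal availability and error-budget calculation (all numbers invented):

```python
def slo_report(total_requests: int, failed_requests: int,
               slo_target: float = 0.999) -> tuple[float, float]:
    """Compute availability against an SLO and the remaining error budget."""
    availability = 1 - failed_requests / total_requests
    budget = total_requests * (1 - slo_target)  # failures the SLO allows
    remaining = budget - failed_requests
    return availability, remaining

availability, remaining = slo_report(total_requests=1_000_000, failed_requests=700)
print(f"availability={availability:.4%}, error budget remaining={remaining:,.0f} requests")
# availability=99.9300%, error budget remaining=300 requests
```

Because both developers and IT operators read the same request counts, a report like this gives the single view of reliability the passage describes.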


How will artificial intelligence impact UK consumers lives?

In the next five years, I expect we may see a rise in new credit options and alternatives, such as “Predictive Credit Cards,” where AI anticipates a consumer’s spending needs based on their past behaviour and adjusts the credit limit or offers tailored rewards accordingly. Additionally, fintechs are likely to integrate Large Language Models (LLMs) and add AI to digital and machine-learning powered services. ... Through AI, consumers may also be able to access a better overview of their finances, specifically personalised financial rewards, as they would have access to tools to review all transactions, receive recommendations on personalised spend-based rewards, and even benchmark themselves against other cardholders in similar demographics or industry standards. Consumers may also be able to ask questions and get answers at the click of a button, for example, ‘How much debt do I have compared to my available credit limits?’ or ‘What’s the best way to use my rewards points based on my recent purchases?’ improving financial literacy and potentially providing them with more spending/saving power and personalised experiences in the long run.


IT Strategy as an Enterprise Enabler

IT Strategy is a plan to create an Information Technology capability for maximizing the business value of the organization. IT capability is the organization's ability to meet business needs and improve business processes using IT-based systems. The objective of IT Strategy is to spend the least amount of resources and generate better ROI. It helps in setting the direction for an IT function in an organization. A successful IT Strategy helps organizations to reduce operational bottlenecks, optimize TCO and derive value from technology. ... IT Strategy definition and implementation covers the key aspects of technology management, planning, governance, service management, risk management, cost management, human resource management, hardware and software management, and vendor management. Broadly, IT Strategy has five phases: Discovery, Assess, Current IT, Target IT and Roadmap. The idea of IT Strategy is to keep the usual annual and multiyear plan but insert regular, frequent check-ins along the way. Revisit the IT Strategy every quarter or every six months to ensure that optimal business value is created.


AI system audits might comply with local anti-bias laws, but not federal ones

"You shouldn’t be lulled into false sense of security that your AI in employment is going to be completely compliant with federal law simply by complying with local laws. We saw this first in Illinois in 2020 when they came out with the facial recognition act in employment, which basically said if you’re going to use facial recognition technology during an interview to assess if they’re smiling or blinking, then you need to get consent. They made it more difficult to do [so] for that purpose. "You can see how fragmented the laws are, where Illinois is saying we’re going to worry about this one aspect of an application for facial recognition in an interview setting. ... "You could have been doing this since the 1960s, because all these tools are doing is scaling employment decisions. Whether the AI technology is making all the employment decisions or one of many factors in an employment decision; whether it’s simply assisting you with information about a candidate or employer that otherwise you wouldn’t have been able to ascertain without advanced machine learning looking for patterns that a human couldn’t have fast enough.



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford

Daily Tech Digest - July 30, 2023

What Is Data Strategy and Why Do You Need It?

Developing a successful Data Strategy requires careful consideration of several key steps. First, it is essential to identify the business goals and objectives that the Data Strategy will support. This will help determine what data is needed and how it should be collected, analyzed, and used. Next, it is important to assess the organization’s current data infrastructure and capabilities. This includes evaluating existing databases, data sources, tools, and processes for collecting and managing data. It also involves identifying current gaps in skills or technology that need to be addressed. Once these foundational elements are in place, organizations can begin to define their approach to Data Governance. This involves establishing policies and procedures for managing Data Quality, security, privacy, compliance, and access. It may also involve developing a framework for decision-making that ensures the right people have access to the right information at the right time. Finally, organizations should consider how they will measure success in implementing their Data Strategy. 


Battling Technical Debt

Technical debt costs you money and takes a sizable chunk of your budget. For example, a 2022 Q4 survey by Protiviti found that, on average, an organization invests more than 30% of its IT budget and more than 20% of its overall resources in managing and addressing technical debt. This money is being taken away from building new and impactful products and projects, and it means the cash might not be there for your best ideas. ... Technical debt impacts your reputation. The impact can be huge and result in unwanted media attention and customers moving to your competitors. In an article about technical debt, Denny Cherry attributes performance woes at US airline Southwest Airlines to poor investment in updating legacy equipment, which caused difficulties with flight scheduling as a result of "outdated processes and outdated IT." If you can't schedule a flight, you're going to move elsewhere. Furthermore, in many industries like aviation, downtime results in crippling fines. These could be enough to tip a company over the edge.


‘Audit considerations for digital assets can be extremely complex’

Common challenges when auditing crypto assets include understanding and evaluating controls over access to digital keys, reconciliations to the blockchain to verify existence of assets, considerations around service providers in terms of qualifications, availability and scope, and forms of reporting, among others. As the technology is rapidly evolving, the regulatory standards do not yet capture all crypto offerings. Everyone is operating in an uncertain regulatory environment, where the speed of change is significant for all participants. If you take accounting standards, for example, a common discussion today is how to measure these assets. Under IFRS, crypto assets are generally recognized as an intangible asset and recorded at cost. While this aligns with the technical requirements of the standards, it sometimes generates financial reporting that may not be well understood by users of the financial information who may be looking for the fair value of these assets.


Does AI have a future in cyber security? Yes, but only if it works with humans

One technique that has been around for a while is rolling AI technology into security operations, especially to manage repeating processes. The AI filters out the noise, identifies priority alerts and screens these out. It is also capable of capturing this data, looking for any anomalies and joining the dots. Established vendors are already providing capabilities like this. Here at Nominet, we have masses of data coming into our systems every day, and being able to look at correlations to identify malicious and anomalous behaviour is very valuable. But once again we find ourselves in the definition trap. Being alerted when rules are triggered is moving towards ML, not true AI. But if we could give the system the data and ask it to find us what looked truly anomalous, that would be AI. Organisations might get tens of thousands of security logs at any point in time. Firstly, how do you know if these logs show malicious activity and if so, what is the recommended course of action?
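As a sketch of what "find us what looked truly anomalous" could mean in practice, here is a minimal example using scikit-learn's IsolationForest over hypothetical per-host log features; a real log pipeline would need far richer feature extraction than this.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-host features derived from security logs:
# [failed logins per hour, bytes sent (MB), distinct destination ports]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[2, 50, 5], scale=[1, 10, 2], size=(500, 3))
suspicious = np.array([[40, 900, 60]])  # e.g., brute force plus exfiltration
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # -1 = anomalous, 1 = normal
# The injected outlier (last row, index 500) should be among the flagged rows.
print("flagged rows:", np.where(labels == -1)[0])
```

Unlike a rule that fires on a fixed threshold, the model learns what "normal" looks like from the data itself, which is closer to the behaviour the passage is asking for.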


Moody’s highlights DLT cyber risks for digital bonds

The body of the paper warns of the cyber risks of smaller public blockchains, which are less decentralized and hence more vulnerable to attacks. It considers private DLTs more secure than similarly sized (small) public blockchains because they have greater access controls. Moody’s acknowledges that larger Layer 1 public blockchains such as Ethereum are far harder to attack, but upgrades to the network carry risks. A major challenge is the safeguarding of private keys. In reality the most significant risks relate to the platforms themselves, bugs in smart contracts, and oracles, which introduce external data. It notes that currently many solutions don’t have cash on ledger, which reduces the attack surface and makes them less attractive to attack. As cash on ledger becomes more widespread, this enables greater automation. Manipulating smart contract weaknesses could result in unintended payouts and other vulnerabilities. Moody’s specifically mentions the risks associated with third party issuance platforms such as HSBC Orion, DBS, and Goldman Sachs’ GS DAP.


Cyber Resilience Act: EU Regulators Must Strike the Right Balance to Avoid Open Source Chilling Effect

The good news is that developers are willing to work with regulators in fine-tuning the act. And why not get them involved? They know the industry, hold deep insights into prevailing processes and fully grasp the intricacies of open source. Additionally, open source is too lucrative and important to ignore. One suggestion is to clarify the wording. For example, replace “commercial activity” with “paid or monetized product.” This will go some way to narrowing the act’s scope and ensuring that open-source projects are not unnecessarily targeted. Another is differentiating between market-ready software products and stand-alone components, ensuring that requirements and obligations are appropriately tailored. Meanwhile, regulators can provide funding in the legislation to actively support open source. For example, Germany grants resources to support developers in maintaining open-source software projects of strategic importance. A similar sovereign tech fund could prove instrumental in supporting and protecting the industry across the continent.


Organizational Resilience And Operating At The Speed Of AI

The challenge becomes—particularly for mid-market organizations that may not have the resources of their larger competitors—how to corral resources to ensure they can effectively incorporate AI. If businesses are to achieve the kind of organizational resilience that is necessary to build sustainable enterprises, they must accept that AI and automation will fundamentally change company structures, culture and operations. Much of this will require investment in “intangible goods, such as business processes and new skills,” as suggested in the Brookings Institution article, but I would like to add one additional imperative: data gravity. ... To operate at the speed of AI, systems must be able to access all the information within an organization’s disparate IT infrastructure. That data must be secure, have integrity and be without bias. AI requires data agility. Therefore, organizations should employ a data gravity strategy whereby all the data within an organization is consolidated into a central hub, creating a single view of all the information.


As Ransomware Monetization Hits Record Low, Groups Innovate

With ransomware profits in decline, groups have been exploring fresh strategies to drive them back up. While groups such as Clop have shifted tactics away from ransomware to data theft and extortion, other groups have been targeting larger victims, seeking bigger payouts. Some affiliates have been switching ransomware-as-a-service provider allegiance, with many Dharma and Phobos business partners adopting a new service named 8Base, Coveware says. Numerous criminal groups continue to wield crypto-locking malware. The greatest number of successful attacks it saw during the second quarter involved either BlackCat or Black Basta ransomware, followed by Royal, LockBit 3.0, Akira, Silent Ransom and Cactus. One downside of crypto-locking malware is that attacks designed to take down the largest possible victims, in pursuit of the biggest potential ransom payment, typically demand substantial manual effort, including hands-on-keyboard time. Groups may also need to purchase stolen credentials for the target from an initial access broker, pay penetration testing experts or share proceeds with other affiliates.


How Indian organisations are keeping pace with cyber security

Jonas Walker, director of threat intelligence at Fortinet, said the digitisation of retail and the rise of e-commerce makes those sectors susceptible to payment card data breaches, supply chain attacks and attacks targeting customer information. “Educational institutions also hold a wealth of personal information, including student and faculty data, making them attractive targets for data breaches and identity theft,” he added. But enterprises in India are not about to let the bad actors get their way. Sakra World Hospital, for example, has segmented its networks and implemented role-based access, endpoint detection and response, as well as zero-trust capabilities for its internal network. It also conducts vulnerability assessments and penetration tests to secure its external assets. “Zero-trust should be implemented on your external security appliances as well,” he added. “The notification system should be strong and prompt so that action can be taken immediately to mitigate any cyber security risk.”


How Can Blockchain Lead to a Worldwide Economic Boom?

The inherent trustworthiness of distributed ledgers is a key factor here in that they greatly enhance critical economic drivers like supply chain management, land ownership, and the distribution of government and non-government services. At the same time, blockchain’s support of digital currencies provides greater access to capital, in large part by side-stepping the regulatory frameworks that govern sovereign currencies. And perhaps most importantly, blockchain helps to stymie public corruption and the diversion of funds away from their intended purpose, which allows capital and profits to reach those who have earned them and will put them to more productive uses. None of this should imply that blockchain will put the entire world on easy street. Significant challenges remain, not the least of which is the cost to establish the necessary infrastructure to support secure digital ledgers. Multiple hardened data centers are required to prevent hacking, along with high-speed networks to connect them.



Quote for the day:

"Leadership is a privilege to better the lives of others. It is not an opportunity to satisfy personal greed." -- Mwai Kibaki

Daily Tech Digest - July 29, 2023

A New Paradigm In Cybersecurity

Currently, many enterprises still focus on tower-based security (that is, securing the “home”)—securing networks, data centers, databases, endpoints, applications and middleware and then applying horizontal security solutions for rights management, policy, access management and authentication. The overarching framework creates a secure environment, but fails to prevent breaches by insiders or by external threat actors that have seized credentials or exploited the hundreds of vulnerabilities in every domain at any given time. The crown jewels—specifically data representing client information, contracts, legal documents, employee information, KYC files and other pertinent information—must remain secure and private, even if the credentials are breached, or vulnerabilities lead to breach in the networks, data centers or endpoints. Particularly in this era of increased cyber risk, along with growing reliance on data across every industry, enterprise leaders should consider this level of security and privacy, and this ease of access, as non-negotiable for keeping their valuable data and digital assets safe.
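A minimal sketch of data-centric protection, using the Python cryptography library's Fernet construction: the record itself is encrypted, so a breach of datastore credentials alone yields only ciphertext. The key-management arrangement shown in the comments is an assumption, not a prescription.

```python
from cryptography.fernet import Fernet

# The key lives in a separate key-management system, not with the data store,
# so stolen database credentials alone expose only ciphertext.
key = Fernet.generate_key()  # in practice: fetched from a KMS/HSM
f = Fernet(key)

record = b'{"client": "Acme", "kyc_file": "passport details"}'
token = f.encrypt(record)    # what a breached datastore would actually hold
print(token[:40], b"...")

assert f.decrypt(token) == record  # only key holders recover the crown jewels
```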


EU opens Microsoft antitrust investigation into Teams bundling

The European Commission will now carry out an in-depth investigation into whether Microsoft may have breached EU competition rules by tying or bundling Microsoft Teams to its Office 365 and Microsoft 365 productivity suites. “Remote communication and collaboration tools like Teams have become indispensable for many businesses in Europe,” explains Margrethe Vestager, executive vice-president in charge of competition policy at the European Commission. “We must therefore ensure that the markets for these products remain competitive, and companies are free to choose the products that best meet their needs. This is why we are investigating whether Microsoft’s tying of its productivity suites with Teams may be in breach of EU competition rules.” Microsoft has responded to the EU’s complaint. “We respect the European Commission’s work on this case and take our own responsibilities very seriously,” says Microsoft spokesperson Robin Koch, in a statement to The Verge.


How to define your ideal embedded build system

You might think that your software stack has nothing to do with your build system; however, the build configurations you select may dictate how your software is organized. After all, building to simulate your application code shouldn’t include low-level target drivers. In fact, you may find that even the middleware you use is entirely different! Defining your ideal build configurations may impact your software stack and vice versa. A modern embedded software stack will include several layers of independent software that are glued together through HALs and APIs. ... Today, are you using your ideal build system? Do you even know what that ideal build system looks like and the benefits it creates for your team? I highly recommend that you take a few minutes to answer these questions. If you don’t have an ideal build system you’re trying to reach, you can easily define your ideal system in 30 minutes or less. Once you have your ideal build system, look at where your build system is today. If it’s not your ideal system, don’t fret! Define some simple goals with deadlines and work on creating your ideal build system.
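As a toy illustration of configurations dictating organization, the Python sketch below (all file names hypothetical) defines a simulation build that links host stubs instead of the low-level target drivers the target build uses.

```python
# Hypothetical source sets: the "sim" configuration links host stubs
# instead of the low-level target drivers, so the simulated application
# never pulls in target-specific code.
COMMON = ["app/main.c", "app/logic.c", "middleware/scheduler.c"]

BUILD_CONFIGS = {
    "target": {
        "sources": COMMON + ["hal/stm32_gpio.c", "drivers/uart_dma.c"],
        "defines": ["-DTARGET_BUILD"],
    },
    "sim": {
        "sources": COMMON + ["hal/host_stubs.c"],  # no target drivers at all
        "defines": ["-DSIM_BUILD"],
    },
}

def sources_for(config: str) -> list[str]:
    """Return the source set a given build configuration would compile."""
    return BUILD_CONFIGS[config]["sources"]

print(sources_for("sim"))
```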


The Future of the Enterprise Cloud Is Multi-Architecture Infrastructure

Data centers and the cloud grew up on a single, monolithic approach to computing — one size fits all. That worked in the days when workloads were relatively few and very straightforward. But as cloud adoption exploded, so too have the number and types of workloads that users require. A one-size-fits-all environment just isn’t flexible enough for users to be able to run the types of workloads they want in the most effective and cost-efficient manner. Today, technologies have emerged to overthrow the old paradigm and give developers and cloud providers what they need: flexibility and choice. One of its manifestations is multi-architecture, the ability of a cloud platform or service to support more than one legacy architecture and offer developers the flexibility to choose. Flexibility to run workloads on the architecture of your choice is important for organizations for two reasons: better price performance and — a reason that is far downstream from data centers but nevertheless important — laptops and mobile devices. 


The lost art of cloud application engineering

AI-driven coders learn from existing code repositories. They often lack a contextual understanding of the code they generate. They produce code that works but may be hard to comprehend or maintain. This hinders developers’ control over their software and often causes mistakes when fixing or changing applications. Moreover, the generated code may not meet style conventions or best practices or include appropriate error handling. This can make debugging, maintenance, and collaboration difficult. Remember that AI-driven code generation focuses on learning from existing code patterns to generate net-new code. Generative AI coders have a “monkey see, monkey do” approach to development, where the coding approaches are learned from the vast amount of code used as training data. This approach is helpful for repetitive or standard tasks, which is much of what developers do, but enterprises may require more creativity and innovation for complex or unique problems.


Beyond generative AI, data quality a data management trend

"We have seen a belated -- but much required -- resurgence in the interest in data quality and supporting capabilities," Ghai said. "Generative AI may be one of the factors. But also organizations are realizing that their data is the only sustainable moat, and without data quality, this moat is not that strong." Tied in with data quality and observability, which fall under the purview of data governance, Ghai added that data privacy continues to gain importance. "There's a growing focus on building analytics with strong privacy protection via techniques like confidential computing and differential privacy," he said. ... Like Ghai, Petrie noted that many organizations are prioritizing data quality -- and overall data governance -- as their data volume grows and fuels new use cases, including generative AI. "AI creates exciting possibilities," he said. "But without accurate and governed data, it will not generate the expected business results. I believe this realization will prompt companies to renew their investments in product categories such as data observability, master data management and data governance platforms."


What Exactly Does a Head of Enterprise Design Do?

Where the head of the enterprise design sits in the organizational hierarchy depends on various factors, Heiligenthal said. Company size and maturity play a role, but typically, she said, it starts in marketing and lands within product, digital and technology groups. “Organizations can be matrixed or hierarchically structured, and success depends on visibility within the organization. The further the design organization is buried, the harder it is to receive buy-in and get work done,” she explained. A struggle that many enterprises come up against, she said, is how and where the design organization sits. “Is it centralized, decentralized or embedded within products?” Each setup has its own benefits and drawbacks, but what’s worked for her is a centralized partnership; one with a distinct, centralized design team that shares a sense of purpose with various product and business teams. “It’s a hub-and-spoke model that centralizes design teams (the hub) and gives us freedom to execute across the organization (the spokes) based on north-star alignment across the business.”


How Intelligent Automation is Driving Positive Change in the Public Sector

When implemented strategically and with the needs of an organisation at the forefront, intelligent automation can support collaboration, communication and new, modern ways of working, not only contributing to increased efficiency and cost-effectiveness, but also boosting employee experience. In particular, the elimination of repetitive, manual and time-intensive administrative work – which is often cited as one of the main factors of public sector job dissatisfaction – can help with retaining key talent within the sector. Along with meeting employee expectations, strategic deployment of automation also helps public bodies to keep pace with the expectations of the general public. Automation allows citizens to access support more effectively, without the need for human intervention, especially in the case of adult social services, which are resource-intensive centres that manage access to help and support across many key areas such as pensions, healthcare and benefits.


Ethical Debt: Why ESG Matters As Much As Technical Debt

Ethical debt is the combined consequences of a lack of optimization in your organization's environmental, social, and governance (ESG) capabilities. As pressure on organizations to meet ethical standards intensifies, managing ethical debt is becoming essential. ... Just like technical debt, the consequences of ethical debt aren't just financial. Your debt getting out of control can impact your reputation, relationship with regulators, talent retention, and partnerships. Another key consideration of the concept of ethical debt, however, is possible exploitation. When you're creating a product, you now need to consider what the consequences could be if it is misused by your customers. Ethical debt isn't just a theoretical concept. ... Environmental, social, and governance (ESG) ethical debt is much like technical debt. Every organization has a certain amount and getting it down to zero is likely not worth your investment, but your ethical debt must be reduced to a reasonable level or there will be consequences for your organization.


Cloud Computing Services: Revolutionizing Data Management in Telecommunications

In addition to cost savings and scalability, cloud computing services also provide robust data security. With cyber threats becoming increasingly sophisticated, data security is a top priority for telecommunications companies. Cloud service providers employ advanced security measures, including encryption and multi-factor authentication, to protect data from unauthorized access. Furthermore, data stored in the cloud is typically backed up in multiple locations, ensuring its availability even in the event of a disaster. The integration of cloud computing services in telecommunications also facilitates improved collaboration and productivity. With data stored in the cloud, employees can access it from anywhere, at any time, using any device with an internet connection. This promotes a more flexible and efficient work environment, enabling teams to collaborate effectively regardless of their geographical location. The adoption of cloud computing services in telecommunications is also driving innovation. 



Quote for the day:

"Leaders are the ones who keep faith with the past, keep step with the present, and keep the promise to posterity." -- Harold J. Seymour

Daily Tech Digest - July 28, 2023

Cyber criminals pivot away from ransomware encryption

“Data theft extortion is not a new phenomenon, but the number of incidents this quarter suggests that financially motivated threat actors are increasingly seeing this as a viable means of receiving a final payout,” wrote report author Nicole Hoffman. “Carrying out ransomware attacks is likely becoming more challenging due to global law enforcement and industry disruption efforts, as well as the implementation of defences such as increased behavioural detection capabilities and endpoint detection and response (EDR) solutions,” she said. In the case of Clop’s attacks, Hoffman observed that it was “highly unusual” for a ransomware group to so consistently exploit zero-days given the sheer time, effort and resourcing needed to develop exploits. She suggested this meant that Clop likely has a level of sophistication and funding that is matched only by state-backed advanced persistent threat actors. Given Clop’s incorporation of zero-days in managed file transfer (MFT) products into its playbook, and its rampant success in doing so


Get the best value from your data by reducing risk and building trust

Data risk is potentially detrimental to the business due to data mismanagement, inadequate data governance, and poor data security. Data risk that isn’t recognized and mitigated can often result in a costly security breach. To improve security posture, enterprises need to have an effective strategy for managing data, ensure data protection is compliant with regulations and look for solutions that provide access controls, end-to-end encryption, and zero-trust access, for example. Assessing data risk is not a tick-box exercise. The attack landscape is constantly changing, and enterprises must assess their data risk regularly to evaluate their security and privacy best practices. A data subject access request is when an individual submits an inquiry asking how their personal data is harvested, stored, and used. Such requests are a requirement of several data privacy regulations, including GDPR. It is recommended that enterprises automate these data subject requests so that they are easier to track, preserve data integrity, and are handled swiftly to avoid penalties.
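As a sketch of what automating such requests might start from, here is a toy tracker; the 30-day window reflects GDPR's general one-month response deadline, and all names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class SubjectAccessRequest:
    subject_id: str
    received: date
    fulfilled: bool = False
    # GDPR generally allows one month to respond (extendable in complex cases).
    due: date = field(init=False)

    def __post_init__(self):
        self.due = self.received + timedelta(days=30)

    def overdue(self, today: date) -> bool:
        """Flag requests at risk of penalties so they can be escalated."""
        return not self.fulfilled and today > self.due

req = SubjectAccessRequest("user-482", received=date(2023, 7, 1))
print(req.due, req.overdue(today=date(2023, 8, 5)))  # 2023-07-31 True
```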


Why Developers Need Their Own Observability

The goal of operators’ and site reliability engineers’ observability efforts are straightforward: Aggregate logs and other telemetry, detect threats, monitor application and infrastructure performance, detect anomalies in behavior, prioritize those anomalies, identify their root causes and route discovered problems to their underlying owner. Basically, operators want to keep everything up and running — an important goal but not one that developers may share. Developers require observability as well, but for different reasons. Today’s developers are responsible for the success of the code they deploy. As a result, they need ongoing visibility into how the code they’re working on will behave in production. Unlike operations-focused observability tooling, developer-focused observability focuses on issues that matter to developers, like document object model (DOM) events, API behavior, detecting bad code patterns and smells, identifying problematic lines of code and test coverage. Observability, therefore, means something different to developers than operators, because developers want to look at application telemetry data in different ways to help them solve code-related problems.
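As a minimal illustration of developer-focused instrumentation, the sketch below wraps a hypothetical handler so that latency and failures are emitted as telemetry the developer can inspect once the code is running in production.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)

def instrument(fn):
    """Emit latency and outcome telemetry for one code path."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception:
            logging.exception("%s failed", fn.__name__)  # surface bad code paths
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("%s took %.1f ms", fn.__name__, elapsed_ms)
    return wrapper

@instrument
def handle_request(payload: dict) -> dict:  # hypothetical API handler
    return {"ok": True, "items": len(payload)}

handle_request({"a": 1})
```

Where an operator's dashboard aggregates such signals fleet-wide, the developer reads them per function, per code path, which is the difference in perspective the passage describes.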


Understanding the value of holistic data management

Data holds valuable insights into customer behaviour, preferences and needs. Holistic management of data enables organisations to consolidate and analyse their customers’ data from multiple sources, leading to a comprehensive understanding of their target audience. This knowledge allows companies to tailor their products, services and marketing efforts to better meet customer expectations, which can result in improved customer satisfaction and loyalty. With some on-market tools, organisations can also map the relationships between their customers and see how they are connected in the real world. Establishing customer relationships can be very beneficial, especially for target marketing. To demonstrate this point, for example, an e-mail arrives in your inbox shortly before your anniversary date, suggesting a specifically tailor-made gift for your partner. It is extremely important for an organisation to have a competitive edge and to stay relevant. Data that is not holistically managed will slow down the organisation's ability to make timely and informed decisions, hindering its ability to respond quickly to changing market dynamics and stay ahead of its competitors.


Why Today's CISOs Must Embrace Change

While this is a long-standing challenge, I've seen the tide turn over the past four or five years, especially when COVID happened. Just the nature of the event necessitated dramatic change in organizations. During the pandemic, CISOs who said "no, no, no," lost their place in the organization, while those who said yes and embraced change were elevated. Today we're hitting an inflection point where organizations that embrace change will outpace the organizations that don't. Organizations that don't will become the low-hanging fruit for attackers. We need to adopt new tools and technologies while, at the same time, we help guide the business across the fast-evolving threat landscape. Speaking of new technologies, I heard someone say AI and tools won't replace humans, but the humans that leverage those tools will replace those that don't. I really like that — these tools become the "Iron Man" suit for all the folks out there who are trying to defend organizations proactively and reactively. Leveraging all those tools in combination with great intelligence, I think, enables organizations to outpace the organizations that are moving more slowly and many adversaries.


Navigating Digital Transformation While Cultivating a Security Culture

When it comes to security and digital transformation, one of the first things that comes to mind for Reynolds is the tech surface. “As you evolve and transition from legacy to new, both stay parallel running, right? Being able to manage the old but also integrate the new, but with new also comes more complexity, more security rules,” he says. “A good example is cloud security. While it’s great for onboarding and just getting stuff up and running, they do have this concept of shared security where they manage infrastructure, they manage the storage, but really, the IAM, the access management, the network configuration, and ingress and egress traffic from the network are still your responsibility. And as you evolve to that and add more and more cloud providers, more integrations, it becomes much more complex.” “There’s also more data transference, so there are a lot of data privacy and compliance requirements there, especially as the world evolves with GDPR, which everyone hopefully by now knows.


Breach Roundup: Zenbleed Flaw Exposes AMD Ryzen CPUs

A critical vulnerability affecting AMD's Zen 2 processors, including popular CPUs such as the Ryzen 5 3600, was uncovered by Google security researcher Tavis Ormandy. Dubbed Zenbleed, the flaw allows attackers to steal sensitive data such as passwords and encryption keys without requiring physical access to the computer. Tracked as CVE-2023-20593, the vulnerability can be exploited remotely, making it a serious concern for cloud-hosted services. The vulnerability affects the entire Zen 2 product range, including AMD Ryzen and Ryzen Pro 3000/4000/5000/7020 series, and the EPYC "Rome" data center processors. Data can be transferred at a rate of 30 kilobits per core, per second, allowing information extraction from various software running on the system, including virtual machines and containers. Zenbleed operates without any special system calls or privileges, making detection challenging. While AMD released a microcode patch for second-generation Epyc 7002 processors, other CPU lines will have to wait until at least October 2023. 


The Role of Digital Twins in Unlocking the Cloud's Potential

A DT, in essence, is a high-fidelity virtual model designed to mirror an aspect of a physical entity accurately. Let’s imagine a piece of complex machinery in a factory. This machine is equipped with numerous sensors, each collecting data related to critical areas of functionality from temperature to mechanical stress, speed, and more. This vast array of data is then transmitted to the machine’s digital counterpart. With this rich set of data, the DT becomes more than just a static replica. It evolves into a dynamic model that can simulate the machinery’s operation under various conditions, study performance issues, and even suggest potential improvements. The ultimate goal of these simulations and studies is to generate valuable insights that can be applied to the original physical entity, enhancing its performance and longevity. The resulting architecture is a dual Cyber-Physical System with a constant flow of data that brings unique insights into the physical realm from the digital realm.
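A toy version of this pattern in Python: the twin ingests sensor readings from its physical counterpart and answers simple what-if questions against them (all names and limits are invented).

```python
from dataclasses import dataclass, field

@dataclass
class MachineTwin:
    """A toy digital twin: mirrors sensor streams from one machine and
    runs simple simulations against the mirrored state."""
    asset_id: str
    readings: dict[str, list[float]] = field(default_factory=dict)

    def ingest(self, sensor: str, value: float) -> None:
        """Append telemetry arriving from the physical machine's sensors."""
        self.readings.setdefault(sensor, []).append(value)

    def simulate_overheat(self, sensor: str, extra_load_c: float,
                          limit_c: float) -> bool:
        """Would the machine exceed its thermal limit under additional load?"""
        latest = self.readings[sensor][-1]
        return latest + extra_load_c > limit_c

twin = MachineTwin("press-machine-3")
twin.ingest("temperature", 71.4)
print(twin.simulate_overheat("temperature", extra_load_c=15.0, limit_c=80.0))
# True: the insight flows back to the physical asset before damage occurs
```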


The power of process mining in Power Automate

Having tools that identify and optimize processes is an important foundation for any form of process automation, especially as we often must rely on manual walkthroughs. We need to be able to see how information and documents flow through a business in order to be able to identify places where systems can be improved. Maybe there’s an unnecessary approval step between data going into line-of-business applications and then being booked into a CRM tool, where it sits for several days. Modern process mining tools take advantage of the fact that much of the data in our businesses is already labeled. It’s tied to database tables or sourced from the line-of-business applications we have chosen to use as systems of record. We can use these systems to identify the data associated with, say, a contract, and where it needs to be used, as well as who needs to use it. With that data we can then identify the process flows associated with it, using performance indicators to identify inefficiencies, as well as where we can automate manual processes—for example, by surfacing approvals as adaptive cards in Microsoft Teams or in Outlook.
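As a minimal sketch of the idea, the snippet below reconstructs step-to-step dwell times from a hypothetical event log; long average waits, such as data sitting for days before it is booked into the CRM, point to the bottlenecks worth automating.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (case id, activity, timestamp), as exported from a
# system of record. Process mining reconstructs flows and dwell times from it.
log = [
    ("c1", "data entered",  datetime(2023, 7, 3, 9, 0)),
    ("c1", "approval",      datetime(2023, 7, 3, 9, 5)),
    ("c1", "booked in CRM", datetime(2023, 7, 6, 14, 0)),
    ("c2", "data entered",  datetime(2023, 7, 4, 11, 0)),
    ("c2", "approval",      datetime(2023, 7, 4, 11, 2)),
    ("c2", "booked in CRM", datetime(2023, 7, 8, 10, 0)),
]

by_case = defaultdict(list)
for case, activity, ts in sorted(log, key=lambda e: (e[0], e[2])):
    by_case[case].append((activity, ts))

waits = defaultdict(list)
for steps in by_case.values():
    for (a, t1), (b, t2) in zip(steps, steps[1:]):
        waits[(a, b)].append((t2 - t1).total_seconds() / 3600)

for step, hours in waits.items():
    print(step, f"avg {sum(hours) / len(hours):.1f} h")  # long waits = bottleneck
```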


Data Program Disasters: Unveiling the Common Pitfalls

In the realm of data management, it’s tempting to be swayed by the enticing promises of new tools that offer lineage, provenance, cataloguing, observability, and more. However, beneath the glossy marketing exterior lies the lurking devil of hidden costs that can burn a hole in your wallet. Let’s consider an example: while you may have successfully negotiated a reduction in compute costs, you might have overlooked the expenses associated with data egress. This oversight could lead to long-term vendor lock-in or force you to spend the hard-earned savings secured through skilful negotiation on the data outflow. This is just one instance among many; there are live examples where organizations have chosen tools solely based on their features, only to realize later that such tools failed to fully comply with the regulations of their industry or the country they operate in. In such cases, you’re left with two options: either wait for the vendor to become compliant, severely stifling your go-to-market strategy, or supplement your setup with additional services, effectively negating your cost-saving efforts and bloating your architecture.
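A toy calculation shows how that compute-versus-egress oversight plays out; all prices below are invented for illustration.

```python
# Toy numbers: a negotiated compute discount can be wiped out by overlooked
# egress fees. Every price here is hypothetical.
compute_list_price = 100_000        # USD/year before negotiation
compute_discount = 0.20             # hard-won 20% reduction
egress_tb_per_year = 600
egress_price_per_tb = 50            # USD, hypothetical

savings = compute_list_price * compute_discount         # 20,000
egress_cost = egress_tb_per_year * egress_price_per_tb  # 30,000
print(f"savings ${savings:,.0f} vs egress ${egress_cost:,.0f}"
      f" -> net {'loss' if egress_cost > savings else 'gain'}")
```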



Quote for the day:

"It's very important in a leadership role not to place your ego at the foreground and not to judge everything in relationship to how your ego is fed." -- Ruth J. Simmons