Daily Tech Digest - August 31, 2023

Most hyped network technologies and how to deal with them

Hype four is zero trust. Security is justifiably hot, and there never seems to be an end to the new notions that come along. Zero trust, according to technologists, is problematic not only because senior management tends to jump on it without thinking, but because there isn't even a consistent view of the technology being presented. “Trust-washing,” said one professional, “has taken over my security meetings,” meaning that too much time is spent addressing the claims vendors make. Technologists say the best way to address this hype starts with redefining “zero trust” as “explicit trust” and making clear that this will require adding tools and processes to validate users, resources, and their relationships. That will affect the line organizations whose users and applications are being protected, since they will have to define and take the steps needed to establish trust. Zero-trust enhancements are best implemented through a vendor already established in the security or network-connectivity space, so start by reviewing the tools available from these incumbent vendors.

Don’t Build Microservices, Pursue Loose Coupling

While it is true that microservices strategies do support loose coupling, they’re not the only way. Simpler architectural strategies can afford smaller or newer projects the benefits of loose coupling in a more sustainable way, generating less overhead than building up microservices-focused infrastructure. Architectural choices are as much about the human component of building and operating software systems as they are about technical concerns like scalability and performance. And the human component is where microservices can fall short. When designing a system, one should distinguish between intentional complexity (where a complex problem rightfully demands a complex solution) and unintentional complexity (where an overly complex solution creates unnecessary challenges). It’s true that firms like Netflix have greatly benefited from microservices-based architectures with intentional complexity. But an up-and-coming startup is not Netflix, and trying to follow in the streaming titan’s footsteps can introduce a great degree of unintentional complexity.
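Loose coupling without microservices can be as simple as an in-process interface boundary. The sketch below is illustrative (the `PaymentGateway` names and logic are assumptions, not from the article): callers depend on an abstract interface, so the implementation behind it can later be swapped, even for a remote service, without touching the calling code.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """Abstract boundary: callers depend on this interface, not a concrete service."""
    def charge(self, amount_cents: int) -> bool: ...

class FakeGateway:
    """In-process implementation; could later be replaced by a remote service
    without changing any code that calls checkout()."""
    def charge(self, amount_cents: int) -> bool:
        return amount_cents > 0

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    # Checkout logic is loosely coupled to the payment implementation.
    return "paid" if gateway.charge(amount_cents) else "declined"

print(checkout(FakeGateway(), 500))  # -> paid
```

The seam here is architectural, not infrastructural: no service mesh, no network hop, yet the coupling is just as loose.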

MPs say UK at real risk of falling behind on AI regulation

Noting current trialogue negotiations taking place in the EU over its forthcoming AI Act and the Biden administration's voluntary agreements with major tech firms over AI safety, the SITC chair Greg Clark told reporters at a briefing that time is running out for the government to establish its own AI-related powers and oversight mechanisms. “If there isn’t even quite a targeted and minimal enabling legislation in this session, in other words in the next few months, then the reality [for the introduction of UK AI legislation] is probably going to be 2025,” he said, adding it would be “galling” if the chance to enact new legislation was not taken simply because “we are timed out”. “If the government’s ambitions are to be realised and its approach is to go beyond talks, it may well need to move with greater urgency in enacting the legislative powers it says will be needed.” He further added that any legislation would need to be attuned to the 12 AI governance challenges laid out in the committee’s report, which relate to various competition, accountability and social issues associated with AI’s operation.

The Agile Architect: Mastering Architectural Observability To Slay Technical Debt

Architectural observability rests on two key phases: analysis and observation. The former provides a deeper understanding of the software architecture, while the latter maintains an up-to-date picture of the system. These intertwined phases, reflecting the adaptive nature of Agile methodologies, foster effective system management. ... The cyclic 'analyzing-observing' process starts with a deep dive into the nitty-gritty of the software architecture. By analyzing the information gathered about the application, we can identify elements like domains within the app, unnecessary code, or problematic classes. This methodical exploration helps architects simplify their applications and better understand their static and dynamic behavior. The 'observation' phase, like a persistent scout, keeps an eye on architectural drift and changes, helping architects identify problems early and stay up to date with the current architectural state. In turn, this information feeds back into further analysis, refining the understanding of the system and its dynamics.

Operation 'Duck Hunt' Dismantles Qakbot

The FBI dubbed the operation behind the takedown "Duck Hunt," a play on the Qakbot moniker. The operation is "the most significant technological and financial operation ever led by the Department of Justice against a botnet," said United States Attorney Martin Estrada of the Central District of California. International partners in the investigation include France, Germany, the Netherlands, the United Kingdom, Romania and Latvia. "Almost every country in the world was affected by Qakbot, either through direct infected victims or victims attacked through the botnet," said senior FBI and DOJ officials. Officials said Qakbot spread primarily through email phishing campaigns, and FBI probes revealed Qakbot infrastructure and victim computers had spread around the world. Qakbot played a role in approximately 40 different ransomware attacks over the past 18 months that caused $58 million in losses, Estrada said. "You can imagine that the losses have been many millions more through the life of the Qakbot," which cyber defenders first detected in 2008, Estrada added. "Today, all that ends," he said.

How CISOs can shift from application security to product security

The fact that product security has worked its way onto enterprise organizational charts is not a repudiation of traditional application security testing, just an acknowledgement that modern software delivery needs a different set of eyes beyond the ones trained on the microscope of appsec testing. As technology leaders have recognized that applications don’t operate in a vacuum, product security has become the go-to team to help watch the gaps between individual apps. Members of this team also serve as security advocates who can help instill security fundamentals into the repeatable development processes and ‘software factory’ that produces all the code. The emergence of product security is analogous to the addition of site reliability engineering early in the DevOps movement, says Scott Gerlach, co-founder and CSO at API security testing firm StackHawk. “As software was delivered more rapidly, reliability needed to be engineered into the product from inception through delivery. Today, security teams typically have minimal interactions with software during development.”

CIOs are worried about the informal rise of generative AI in the enterprise

What can CISOs and corporate security experts do to put some sort of limits on this AI outbreak? One executive said that it’s essential to toughen up basic security measures like “a combination of access control, CASB/proxy/application firewalls/SASE, data protection, and data loss protection.” Another CIO pointed to reading and implementing some of the concrete steps offered by the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework report. Senior leaders must recognize that risk is inherent in generative AI usage in the enterprise, and proper risk mitigation procedures are likely to evolve. Still, another respondent mentioned that in their company, generative AI usage policies have been incorporated into employee training modules, and that policy is straightforward to access and read. The person added, “In every vendor/client relationship we secure with GenAI providers, we ensure that the terms of the service have explicit language about the data and content we use as input not being folded into the training foundation of the 3rd party service.”

Google’s Duet AI now available for Workspace enterprise customers

The launch of Duet AI means Google has beaten Microsoft to market with genAI tools for its office software suite. Microsoft is currently trialing its own Copilot AI assistant for Microsoft 365 applications such as Word, Excel and Teams. The Microsoft 365 Copilot, based on OpenAI’s ChatGPT, will also cost $30 per user each month when it’s made available later this year or in early 2024. “Google's choice to price Duet at $30 is surprising, given that it's the same price as Microsoft Copilot,” said J. P. Gownder, vice president and principal analyst on Forrester's Future of Work team. “Both offerings promise to improve employee productivity, but Google Workspace is positioned as a lower-cost alternative to Microsoft 365 in the first place. Its products contain perhaps 70% to 80% of the features of their counterparts in the Microsoft 365 office programs suite.” However, as with Microsoft’s genAI feature, Gownder expects Duet will provide customers with improvements around productivity and employee experience, even if it’s too early to make firm judgements on either product.

Empowering Female Cybersecurity Talent in the Supply Chain

While young women and other minority individuals today are taught they can have a successful career in any industry, the right support from educators, peers, and co-workers is a key factor in the eventual decision to enter – and stay in – technical fields. Around 74% of middle school females are interested in STEM subjects. Yet by the time they reach high school, that interest drops, underscoring the need for unwavering awareness efforts and support at an early age. According to a recent report from the NSF's NCSES, more women worked in STEM jobs over the past decade than in previous years – progress in the right direction. Despite this increase, a lack of external support and awareness leaves adolescents exploring different paths. Since many decide their majors as early as age 18, promoting technical roles in college may already be too late. Therefore, it’s imperative that leaders encourage young talent by communicating and rewarding the skill sets needed for these roles and by showcasing the career paths available.

Machine Learning Use Cases for Data Management

In the financial services sector, ML algorithms in fraud detection and risk assessment are expected to enhance security measures and mitigate potential risks. By leveraging advanced Data Management techniques, ML algorithms can analyze vast amounts of financial data to identify patterns and anomalies that may indicate fraudulent activities. These algorithms can adapt and learn from newly emerging fraud patterns, enabling financial institutions to take immediate action. Additionally, ML algorithms can aid in risk assessment by analyzing historical data, market trends, and customer behavior to predict potential risks accurately. ... In the manufacturing sector, ML is revolutionizing quality control and predictive maintenance processes. ML algorithms can analyze vast amounts of data collected from sensors, machines, and production lines to identify patterns and anomalies. This enables manufacturers to detect defects in real time, ensuring product quality while minimizing waste and rework. Moreover, ML algorithms can predict equipment failures by analyzing historical data on machine performance.
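As a minimal illustration of the fraud-detection idea, here is a simple statistical anomaly check (a z-score test; the transaction amounts and threshold are invented for the example, and production systems would use trained ML models as the article describes):

```python
import statistics

def flag_anomalies(amounts, z_threshold=3.0):
    """Flag transaction amounts that deviate strongly from the sample mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    # Short-circuit on stdev avoids division by zero for constant data.
    return [a for a in amounts if stdev and abs(a - mean) / stdev > z_threshold]

history = [20, 25, 19, 30, 22, 27, 21, 24, 26, 5000]  # 5000 is the outlier
print(flag_anomalies(history, z_threshold=2.0))  # -> [5000]
```

Real fraud models learn multivariate patterns rather than a single threshold, but the principle, scoring new observations against learned behavior, is the same.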

Quote for the day:

"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer

Daily Tech Digest - August 30, 2023

Generative AI Faces an Existential IP Reckoning of Its Own Making

Clearly, this situation is untenable, with a raft of dire consequences already beginning to emerge. Should the courts determine that generative AI firms aren’t protected by the fair use doctrine, the still-budding industry could be on the hook for practically limitless damages. Meanwhile, platforms like Reddit are beginning to aggressively push back against unchecked data scraping. ... These sorts of unintended externalities will only continue to multiply unless strong measures are taken to protect copyright holders. Government can play an important role here by introducing new legislation to bring IP laws into the 21st century, replacing outdated regulatory frameworks created decades before anyone could have predicted the rise of generative AI. Government can also spur the creation of a centralized licensing body to work with national and international rights organizations to ensure that artists, content creators, and publishers are being fairly compensated for the use of their content by generative AI companies.

6 hidden dangers of low code

The low-code sales pitch is that computers and automation make humans smarter by providing a computational lever that multiplies our intelligence. Perhaps. But you might also notice that, as people grow to trust in machines, we sometimes stop thinking for ourselves. If the algorithm says it’s the right thing to do, we'll just go along with it. There are endless examples of the disaster that can ensue from such thoughtlessness. ... When humans write code, we naturally do the least amount of work required, which is surprisingly efficient. We're not cutting corners; we're just not implementing unnecessary features. Low code solutions don’t have that advantage. They are designed to be one-size-fits-all, which in computer code means libraries filled with endless if-then-else statements testing for every contingency in the network. Low code is naturally less efficient because it’s always testing and retesting itself. This ability to adjust automatically is the magic that the sales team is selling, after all. But it’s also going to be that much less efficient than hand-tuned code written by someone who knows the business.
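The contrast between one-size-fits-all and hand-tuned code can be sketched as follows (a contrived example under assumed business rules, not from the article): the generic routine tests contingencies this business never hits, while the tuned version implements only what is actually needed.

```python
def generic_discount(order):
    # One-size-fits-all: branches for contingencies this business never hits.
    if order.get("region") == "EU" and order.get("vat_exempt"):
        rate = 0.05
    elif order.get("coupon") == "LAUNCH":
        rate = 0.15
    elif order.get("loyalty") == "gold":
        rate = 0.10
    else:
        rate = 0.0
    return order["total"] * (1 - rate)

def tuned_discount(order):
    # Hand-tuned: this business only ever grants the gold-loyalty discount.
    rate = 0.10 if order.get("loyalty") == "gold" else 0.0
    return order["total"] * (1 - rate)

print(tuned_discount({"total": 100.0, "loyalty": "gold"}))  # -> 90.0
```

Multiply the extra branches by every rule in a large workflow and the efficiency gap the article describes becomes visible.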

Applying Reliability Engineering to the Manufacturing IT Environment

To understand exposure to failure, the Reliability Engineers analyzed common failure modes across manufacturing operations, utilizing the Failure Mode and Effects Analysis (FMEA) methodology to anticipate potential issues and failures. Examples of common failure modes include “database purger/archiving failures leading to performance impact” and “inadequate margin to tolerate typical hardware outages.” The Reliability Engineers also identified systems that were most likely to cause factory impact due to risk from these shared failure modes. This data helped inform a Resiliency Maturity Model (RMM), which scores each common failure mode on a scale from 1 to 5 based on a system’s resilience to that failure mode. This structured approach enabled us to not just fix isolated examples of applications that were causing the most problems, but to instead broaden our impact and develop a reliability mindset. 
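A hypothetical sketch of how an RMM score rollup might look in practice (the failure-mode names and 1-to-5 scores below are illustrative assumptions, not the article's actual data):

```python
# Resiliency Maturity Model sketch: score each common failure mode from
# 1 (least resilient) to 5 (most resilient) for a given system.
rmm_scores = {
    "database purger/archiving failure": 2,
    "single hardware outage": 4,
    "network partition": 3,
}

def weakest_failure_modes(scores, threshold=3):
    """Return failure modes scoring below the target maturity threshold,
    i.e. the ones that should receive reliability investment first."""
    return sorted(mode for mode, score in scores.items() if score < threshold)

print(weakest_failure_modes(rmm_scores))
```

Scoring shared failure modes rather than individual incidents is what lets the approach generalize beyond fixing one problem application at a time.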

5 Skills All Marketing Analytics and Data Science Pros Need Today

Marketing analysts should hone their skills to know who to talk to – and how to talk to them – to secure the information they have. Trust Insights’ Katie Robbert says it requires listening and asking questions to understand what they know that you need to take back to your team, audience, and stakeholders. “You can teach anyone technical skills. People can follow the standard operating procedure,” she says. “The skill set that is so hard to teach is communication and listening.” ... By improving your communication skills, you’ll be well-positioned to follow Hou’s advice: “Weave a clear story in terms of how marketing data could and should guide the organization’s marketing team.” She says you should tell a narrative that connects the dots, explains the how and where of a return on investment, and details actions possible not yet realized due to limited lines of sight. ... Securing organization-wide support requires leaning into what the data can do for the business. “Businesspeople want to see the business outcomes. 

Neural Networks vs. Deep Learning

Neural networks, while powerful in synthesizing AI algorithms, typically require fewer resources. In contrast, because deep learning platforms take time to be trained on complex data sets before they can analyze them and provide rapid results, they typically take far longer to develop, set up, and bring to the point where they yield accurate results. ... Neural networks are trained on data as a way of learning and improving their conclusions over time. As with all AI deployments, the more data they are trained on, the better. Neural networks must be fine-tuned for accuracy over and over as part of the learning process to transform them into powerful artificial intelligence tools. Fortunately for many businesses, plenty of neural networks have been trained for years – long before the current craze inspired by ChatGPT – and are now powerful business tools. ... Deep learning systems make use of complex machine learning techniques and can be considered a subset of machine learning. But in keeping with the multi-layered architecture of deep learning, these machine learning instances can be of various types and various strategies throughout a single deep learning application.

Ready or not, IoT is transforming your world

At its core, IoT refers to the interconnection of everyday objects, devices, and systems through the internet, enabling them to collect, exchange, and analyze data. This connectivity empowers us to monitor and control various aspects of our lives remotely, from smart homes and wearable devices to industrial machinery and city infrastructure. The essence of IoT lies in the seamless communication between objects, humans, and applications, making our environments smarter, more efficient, and ultimately, more convenient. ... Looking ahead, the future of IoT holds remarkable potential. Over the next five years, we can expect a multitude of advancements that will reshape industries and lifestyles. Smart cities will continue to evolve, leveraging IoT to enhance sustainability, security, and quality of life. The healthcare sector will witness even more personalized and remote patient monitoring, revolutionizing the way medical care is delivered. AI and automation will play a pivotal role in driving efficiency and innovation across various domains.

What are network assurance tools and why are they important?

Without a network assurance tool at their disposal, many enterprises would be forced to limit their network reach and capacity. "They would be unable to take advantage of the latest technological advancements and innovations because they didn’t have the manpower or tools to manage them," says Christian Gilby, senior product director, AI-driven enterprise, at Juniper Networks. "At the same time, enterprises would be left behind by their competitors because they would still be utilizing manual, trial-and-error procedures to uncover and repair service issues." The popularity of network assurance technology is also being driven by a growing enterprise demand for network teams to do more with less. "Efficiency is needed in order to manage the ever-expanding network landscape," adds Gilby. New devices and equipment are constantly brought online and added to networks. Yet enterprises don’t have unlimited IT budgets, meaning that staffing levels often remain the same, even as workloads increase.

How tomorrow’s ‘smart cities’ will think for themselves

In the smart cities of the future, technology will be built to respond to human needs. Sustainability is the biggest problem facing cities – and by far the biggest contributor is the automobile. Smart cities will enable the move towards reducing traffic, and towards autonomous vehicles directed efficiently through the streets. Failed first-attempt deliveries are one example: they are a key driver of congestion, as drivers have to return to the same address repeatedly. In a cognitive city, location data showing when a customer is home can be shared anonymously with delivery companies – with the customer's consent – so that more deliveries arrive on the first attempt. Smart parking will be another important way to reduce congestion and make the streets more efficient. Edge computing nodes will sense empty parking spaces and direct cars there in real time. They will also be a key enabler for autonomous driving, delivering more data points to autonomous systems in cars.

Navigating Your Path to a Career in Cyber Security: Practical Steps and Insights

Practical experience is critical in the field of cyber security. Seek opportunities to apply your knowledge and gain hands-on experience as often as you can. I recommend looking for internships, part-time jobs, or volunteer positions that allow you to work on real-world projects and develop practical skills. I cannot stress how important it is to understand the fundamentals. ... Networking is essential for finding job opportunities in any field, including cybersecurity. You should attend industry events and conferences (there are plenty of free ones) and try to meet as many professionals already working in the field as possible. Their insights will go a long way in your journey to finding the right role. There are also many online communities and forums you can join where cyber security experts gather to discuss trends, share knowledge, and explore job opportunities. Networking will help you gain insights, discover job openings, and even receive recommendations from industry professionals.

NCSC warns over possible AI prompt injection attacks

Complex as this may seem, some early developers of LLM products have already seen attempted prompt injection attacks against their applications, though these have generally been either rather silly or basically harmless. Research into prompt injection attacks is continuing, said the NCSC, but there are now concerns that the problem may simply be inherent to LLMs. That said, some researchers are working on potential mitigations, and there are things that can be done to make prompt injection a tougher proposition. Probably one of the most important steps developers can take is to architect the system and its data flows so that they are happy with the worst-case scenario of what the LLM-powered app is allowed to do. “The emergence of LLMs is undoubtedly a very exciting time in technology. This new idea has landed – almost completely unexpectedly – and a lot of people and organisations (including the NCSC) want to explore and benefit from it,” wrote the NCSC team.
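One way to architect for the worst case, as suggested above, is to allowlist the actions the LLM-powered app may take at the application layer, so an injected instruction cannot widen the app's privileges. A minimal sketch (the action names are invented for illustration):

```python
# Constrain the worst case: the application, not the model, decides which
# actions are permitted, so injected instructions cannot escalate privileges.
ALLOWED_ACTIONS = {"summarize", "translate"}

def dispatch(model_requested_action: str, payload: str) -> str:
    """Execute only pre-approved actions, whatever the model's output says."""
    if model_requested_action not in ALLOWED_ACTIONS:
        return "refused: action not in allowlist"
    return f"{model_requested_action}({payload!r})"

# Even if a prompt injection convinces the model to request a dangerous
# action, the application layer refuses it.
print(dispatch("delete_files", "/etc/passwd"))  # refused
print(dispatch("summarize", "quarterly report"))
```

This doesn't stop injection itself, which may be inherent to LLMs as the NCSC notes, but it caps the damage an injected prompt can do.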

Quote for the day:

"When you practice leadership, the evidence of quality of your leadership, is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani

Daily Tech Digest - August 29, 2023

The four Es of AI: Keeping up with the trend that never sleeps

You can’t wrangle AI by yourself. Your journey will be fruitful only to the extent that you can instill in those with whom you go to market a digital fluency and a confidence in your ecosystem. Accordingly, many CIOs have fashioned themselves into the de facto AI professor within their organizations—developing 101 materials and conducting roadshows to build awareness, explain how generative AI differs from other types, and discuss its risks. ... To explore is to pose the question: How can I make AI work for my organization? Since the AI landscape is both large and complex, take a two-pronged approach: analyze internally and marry that analysis to marketplace activity. Internally, start by looking at your value chain or the capabilities that deliver your value proposition. Brainstorm how generative AI could make your processes (and the people supporting those processes) more intelligent and productive. If you’re already using AI for some of the use-cases you brainstorm, no matter – record those too.

How financial institutions can reduce security and other risks from MRAs

Regardless of the cause, remediating an MRA is far more costly than strengthening a risk management process at the outset. Some of the risks and costs that a company accepts when it allows an issue to grow into an MRA include:

Incident remediation costs - The average cost of a data breach or security incident in the financial industry is much higher than the cost of identifying and remediating vulnerabilities earlier. The financial institution will also need to allocate remediation resources, which could include hiring additional staff, investing in new technology, or engaging third-party consultants to provide expertise and support.

Regulatory fines, penalties, and increased scrutiny - If the financial institution fails to address the MRA appropriately, it could face fines and penalties. These fines can range from a few thousand to millions of dollars. Once an MRA is issued, the financial institution may also face increased regulatory scrutiny and oversight.

Adopting and adapting to the industrial metaverse

First, to get all staff members to participate in and make use of these new technologies, finding workable approaches to create sufficient incentives and motivation is necessary to get employees involved at all levels of the organisation and across generations. Second, human resources and managers need to ensure that employees will accept these novel solutions and adopt new ways of working. A long list of requirements exists to make workers and employees comfortable in using the industrial metaverse frequently. Worker motivation and the feeling of meaningfulness are important considerations that relate to social aspects of work. On the data side, safety, security, and ethics of data collection, management and sharing already play major roles, but will do so even more in advanced virtual environments. ... Third, corporate leaders need to create a conclusive strategy for employing the industrial metaverse and establishing the required capabilities for implementation, building and maintaining the digital and physical infrastructure.

Embracing Complexity and Emergence in Organisations

Each organisation has its own context and evolves in a slightly different ecosystem with varied dynamics, Hebert said. Almost any behaviour can be successful for a while given the right environment. But most behaviours that are based on a desire for collaboration, communicating better, and with reciprocity in mind seemed to do better in a way that felt more sustainable, especially when being challenged by new circumstances, Hebert added. Psychological safety is critical when dealing with complexity, Hebert said. The only way you can get into that decision-making process that people have is for them to be able to report what they find challenging, risky, difficult, or even easy. This happens when people can do that without fearing retribution or negative consequences, Hebert mentioned. That feedback making it to you also depends on them trusting that giving that information will lead to positive outcomes and will be acted on, not just that nothing bad will happen, he said.

8 Ways Aspiring Leaders Can Establish Executive Presence At Work

"First, speak the languages of the leaders above you. Do they use certain vocabulary? Learn it and use it as well. Second, dress the part, especially on recorded calls. Finally, find a mentor internally who was promoted and learn what they did. Most people are generally open to sharing their tips." ... While most of us tend to shun making mistakes, Galetti suggests that embracing them, even though uncomfortable, encourages career growth, while also fostering a culture of transparency: "Don’t be afraid to make mistakes and fail – and when you do make a mistake, own it, learn from it, and really examine it so you can understand where you can improve in the future. ... Becoming curious as a leader is essential to operating effectively, so you can have a detailed view of the business. "Leaders are never done learning, and they seek diverse perspectives. Asking questions is how we all learn," Galetti adds. Beyond this, seek to ask questions that no one has ever asked or thought of yet.

From Chaos to Clarity: Revolutionize

While data fabric brings numerous benefits to data management and application development, it is important to acknowledge its limitations and challenges. One limitation lies in the complexity of implementing and maintaining data fabric architectures. Integrating various data sources, ensuring data quality and consistency, and managing data governance can be intricate and time-consuming processes. Additionally, data fabric relies heavily on interoperability and compatibility between different systems and technologies, which can present challenges in heterogeneous environments. Furthermore, with data volumes continuing to grow exponentially, data fabric must efficiently handle large datasets and support high-speed data processing. It is expected that by 2025 the volume of data will increase by over 150%, hitting 181 zettabytes, making it more demanding than ever to ensure scalability without compromising performance. Nevertheless, data fabric has emerged as a transformative force in the realm of data management and application development.

Motherboard Mishaps Undermine Trust, Security

The issue is the latest mishap in a stream of problems that have impacted motherboard makers in the past year. In January, a set of five vulnerabilities in firmware used by baseboard management controllers — remote management chips included on many server motherboards — could have allowed remote access over the Internet. In late May, researchers revealed that a backdoor in hundreds of models of motherboards from Gigabyte, intended to allow easier updating, left computers open to attack. ... In the most recent incident, there is not a lot for users to do but beware of using preview versions of Windows on business systems. However, in general, companies should make sure that fundamental security measures, such as Secure Boot, are enabled on their motherboards. In 2023, this should be standard for all motherboards, but at least one researcher discovered that MSI had turned off Secure Boot on some of the motherboard models. In late 2022, Polish security researcher Dawid Potocki discovered that one version of the company's motherboards shipped without Secure Boot.

Managed Apple IDs, iCloud, and the shadow IT connection

Since Managed Apple IDs are organization-owned and IT-managed, the security risks of using iCloud with Managed Apple IDs are basically a non-issue. But even with Managed Apple IDs, data can still get siloed, lost or inaccessible to users. This happens in multiple ways. On-device data gets backed up to iCloud; data can be synced across multiple devices unevenly; and data can be shared via iCloud across multiple users. Should an employee leave a company, there's little concern about them taking corporate data with them; their access to it through their Managed Apple ID is terminated along with access to other enterprise accounts. But if that departing worker is the only one who had that data, others might not know it exists or be able to access it. As users become more trusting of cloud solutions in general, and iCloud in particular, data could gradually move from central repositories to existing only on user devices and in iCloud storage. The result: that data becomes more personal, because it's associated with the person doing the job as opposed to the job itself.

The perils of short links

Some link-shortening tools, such as Sniply, offer users more than just shorter links. They allow tracking the actions of link clickers on the actual destination site, which is effectively a man-in-the-middle attack: traffic passes through an intermediate service node that monitors all data exchanged between the user and the destination site. Thus, the URL shortener can intercept anything it wants: entered credentials, social network messages, and so on. ... Using such links, an attacker who already has some information about you can redirect you to a phishing site where your personal data is pre-filled. For example, to a copy of a banking site with a valid username and a request to enter your password, or to the “payment gateway” of some service with your bank card number pre-filled, asking you to enter a security code. What’s more, such links can be used for doxing and other types of tracking, especially if the URL shortener service offers advanced functionality. For instance, our recent post about protecting privacy in Twitch looked in detail at ways to de-anonymize streamers and how to counter them.
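One practical defence is to resolve a short link without following it (for example, via an HTTP HEAD request with redirects disabled) and check the redirect's `Location` header against known-good hosts before visiting. A minimal sketch of the checking step, with an assumed allowlist (the hosts here are placeholders, not real recommendations):

```python
from urllib.parse import urlparse

# Illustrative allowlist: hosts the user considers safe redirect targets.
TRUSTED_HOSTS = {"example.com", "www.example.com"}

def is_safe_destination(location_header: str) -> bool:
    """Check a redirect's Location header, obtained without following the
    redirect, against a host allowlist before actually visiting the URL."""
    host = urlparse(location_header).hostname or ""
    return host in TRUSTED_HOSTS

print(is_safe_destination("https://example.com/article"))      # True
print(is_safe_destination("https://phish.example.net/login"))  # False
```

Parsing the hostname rather than substring-matching the URL matters: a phishing link like `https://example.com.evil.net/` would pass a naive `"example.com" in url` check but fails this one.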

ChatGPT – IP and Privacy Considerations

Given the volume of data used to train ChatGPT, it is inevitable that some personal data will be included in the dataset. In accordance with Article 6 of the GDPR, the processing of this personal data requires a legal basis to be considered lawful. Given that few, if any, data subjects will have provided GDPR-standard consent, the only legal basis likely to apply is ‘legitimate interest’. Interestingly, at the end of March the Italian data protection regulator (the Garante) temporarily blocked ChatGPT due to its inability to establish a legal justification under Article 6 GDPR. However, less than a month later the Garante was satisfied that OpenAI had addressed its immediate concerns, including by improving its privacy notice and the rights it grants to individuals. It therefore re-granted access to the Italian public. ChatGPT continues to raise concerns amongst regulators and academics regarding its data protection policies. The European Data Protection Board has established a taskforce to review ChatGPT’s GDPR compliance.

Quote for the day:

"Most people work just hard enough not to get fired, and get paid just enough money not to quit." -- George Carlin

Daily Tech Digest - August 28, 2023

3 keys to making data democratization a reality

The complexity of the modern data stack presents too many opportunities for data sets to fail users, and compromise users’ trust in their data. Companies are using an ever-increasing number of disparate data tools, which in turn increases the number of transformations that data goes through. A user accessing data that’s been through multiple transformations needs to know that they can trust that the data is both accurate and true to the data that was originally captured in source systems. Clearly, this is an issue that must be addressed—especially when we consider that this lack of trust eats away at that 32% metric we saw earlier. In reality, that figure is even lower if business users don’t feel that they can trust the data available to them. Ensuring users can trust their data requires a multi-pronged approach that should involve implementing automated data quality software, providing strong data lineage, and establishing data governance policies. As companies work toward data democratization, providing transparency, auditing abilities, and strong data governance can give users greater confidence in the data being analyzed and the insights being derived from it—leading to more widespread data use.
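As a concrete illustration of "automated data quality software" at its simplest, a pipeline could run a completeness-and-duplicates check after each transformation step. This is a minimal sketch with hypothetical field names, not a substitute for a real data-quality tool:

```python
def quality_report(rows: list[dict], required: list[str]) -> dict:
    """Minimal automated data-quality check: count missing values in
    required fields and duplicate rows, run after each transformation."""
    missing = sum(
        1 for r in rows for f in required if r.get(f) in (None, "")
    )
    seen, dupes = set(), 0
    for r in rows:
        key = tuple(r.get(f) for f in required)
        if key in seen:
            dupes += 1
        seen.add(key)
    return {"rows": len(rows), "missing_values": missing, "duplicates": dupes}

# Hypothetical rows emitted by one transformation step:
rows = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": 10.0},   # duplicate
    {"id": 2, "amount": None},   # missing value
]
report = quality_report(rows, ["id", "amount"])
```

Emitting a report like this at every hop is one way to build the lineage-level trust the excerpt describes: a consumer can see where in the chain the numbers degraded.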

Cybersecurity insurance is missing the risk

The problem is with the nature of the threat. Cyber attackers escalate and adapt quickly, which undermines the historical models that insurance companies rely on. Attackers continually shift the maneuvers they use to identify victims, cause ever-greater losses, and rapidly move to new areas of impact. Denial-of-service attacks were once popular but were superseded by data breaches, which cause much more damage. Recently, attackers expanded their repertoire to include ransomware-style attacks that push insurable losses ever higher. Predicting the cornerstone metrics of actuarial modeling – the Annual Loss Expectancy and the Annual Rate of Occurrence – with a high degree of accuracy is beyond the current capabilities of insurers. The industry currently conducts assessments of new clients to understand their cybersecurity posture, determine whether they are insurable and what should be included in or excluded from policies, and calculate premiums. The current process is to weigh controls against best practices or peers to estimate the security posture of a policyholder.
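The two actuarial metrics named above combine in the standard quantitative risk formula, ALE = SLE × ARO, where the Single Loss Expectancy is the cost of one occurrence. A minimal worked example:

```python
def annual_loss_expectancy(single_loss_expectancy: float,
                           annual_rate_of_occurrence: float) -> float:
    """Standard quantitative risk formula: ALE = SLE * ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Example: a ransomware incident costing $400,000 per occurrence,
# expected once every four years (ARO = 0.25):
ale = annual_loss_expectancy(400_000, 0.25)  # -> 100000.0
```

The article's point is that for cyber risk, both inputs are unstable: SLE jumps when attackers pivot to costlier attack classes, and ARO shifts faster than historical claims data can track.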

The Real Business Value of Platform Engineering

One of the biggest obstacles to managing cloud costs is understanding the business context behind resource consumption. Since the platform is the source of all deployments, it can provide end-to-end visibility into environments launched across all phases of the software development life cycle (SDLC). On its own, cloud billing data lacks transparency. Some platforms, however, can expose how cloud costs are incurred. Integrating application infrastructure into a platform can automate tagging as part of the deployment process. This ties usage back to the specific applications, pipelines, stages and teams that they pertain to. Tracking real-time configurations with this kind of business context can help engineering and technology teams make informed decisions about cost optimization and resource consumption. For example, they may be able to pinpoint a person or team that often leaves environments running, and incurring costs, over the weekends or holidays when they are not being used. These insights can inform the implementation of cost-management guardrails and consumption policies.
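The automated tagging described above can be sketched as a deployment step that stamps every resource with its business context. The tag keys below are illustrative assumptions, not a standard:

```python
def cost_allocation_tags(app: str, team: str, pipeline: str, stage: str) -> dict:
    """Build the tag set a platform could attach to every resource it
    deploys, so raw billing line items can be tied back to the application,
    team, pipeline, and SDLC stage that incurred them."""
    return {
        "app": app,
        "team": team,
        "pipeline": pipeline,
        "stage": stage,   # e.g. dev / staging / prod
    }

# The platform would apply these during deployment, e.g. with a cloud SDK:
#   ec2.create_tags(Resources=[instance_id],
#                   Tags=[{"Key": k, "Value": v} for k, v in tags.items()])
tags = cost_allocation_tags("checkout-api", "payments", "ci-main", "staging")
```

Because the platform owns the deployment step, no engineer can forget the tags, which is what makes the billing data queryable by business context afterward.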

Beyond talent war: Transform employer-employee relations with tech and innovation

Based on Microsoft's Work Trend Index Annual Report, 51% of Gen-Z employees show a greater inclination towards prioritising health and well-being over work. Their top three priorities include a positive workplace culture, mental health and well-being benefits, and a sense of purpose or meaning. Despite these preferences, many employers have yet to prioritise wellness and purpose effectively. This situation prompts the question: How can HR professionals take action to engage Gen-Z employees who perceive less support and encouragement in their growth, given that this lack of support is causing the demographic to reassess the role they envision work playing in their lives? According to the head of HR at Cummins India, one must actively address the challenge of Gen-Z feeling less supported. After all, the ultimate outcome we aim for is building a strong sense of connectedness with our employees. However, it's essential to emphasise that connectedness isn't determined by whether interactions are virtual or physical. This challenge stems from the fact that connectedness is highly personal.

SmokeLoader Trojan Deploys Location-Tracking Malware

The malware scans for Wi-Fi access points every 60 seconds and captures geolocation data that could allow threat actors to track the compromised system, according to a report by researchers at cybersecurity firm Secureworks, who uncovered the novel malware on Aug. 8. "It is unclear how the threat actors use this data. Demonstrating access to geolocation information could be used to intimidate victims or pressure them to comply with demands," the researchers said. Google's geolocation API is a service that accepts an HTTPS request with the cell towers and Wi-Fi access points that a mobile client can detect and returns latitude-longitude coordinates. The malware checks for the WLANSVC service on the compromised system, which indicates the presence of wireless capability on a Windows system. "The malware only checks for the service name and does not confirm the service is operational. If the service name does not exist, then the scanner exits." Whiffy Recon persists on the system by creating the wlan.lnk shortcut in the user's Startup folder.
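For reference, Google's public Geolocation API documents the request shape the report alludes to: a JSON body of observed access points POSTed over HTTPS, answered with latitude/longitude coordinates. A sketch that only builds the payload (nothing is sent, and the input format is a hypothetical scan result):

```python
import json

def geolocate_payload(access_points: list[dict]) -> str:
    """Build the JSON body Google's Geolocation API expects: a list of
    observed Wi-Fi access points (BSSID plus signal strength in dBm)."""
    body = {
        "considerIp": False,  # locate from the APs alone, not the caller's IP
        "wifiAccessPoints": [
            {"macAddress": ap["bssid"], "signalStrength": ap["rssi"]}
            for ap in access_points
        ],
    }
    return json.dumps(body)

# The API endpoint is
#   https://www.googleapis.com/geolocation/v1/geolocate?key=API_KEY
# and a successful response carries
#   {"location": {"lat": ..., "lng": ...}, "accuracy": ...}
payload = geolocate_payload([{"bssid": "00:11:22:33:44:55", "rssi": -61}])
```

Seeing multiple nearby BSSIDs is what makes this precise: the service triangulates against its database of access-point locations, so even a single scan can pin a laptop to a building.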

Business Impact: The Power of Data Experiences

Creating a great data experience means having the ability to access pertinent data at any time and from any location. This entails having an ample amount of data to provide meaningful insights, while also ensuring that data access is restricted to what is necessary. These experiences have the potential to greatly minimise manual labour and significantly reduce the amount of additional work required. The relevance of data varies for each individual within an organisation. As an example, field offices dedicate several hours each week to compile data, which is then sent to headquarters where additional time is spent on overall data compilation. Automating data processes will free up numerous hours throughout the entire organisation. Most importantly, there is greater real-time visibility into the operational aspects of the business. Similarly, the speed and method of accessing data will differ among employees. For example, a hybrid worker or frequent traveler may prefer accessing relevant data on a mobile device, while an office-based employee might opt for a laptop.

Why generative AI is a double-edged sword for the cybersecurity sector

With this technology, bad actors will generate unique payloads or attacks designed to evade security defenses that are built around known attack signatures. One way attackers are already doing this is by using AI to develop webshell variants, malicious code used to maintain persistence on compromised servers. Attackers can input the existing webshell into a generative AI tool and ask it to create iterations of the malicious code. These variants can then be used, often in conjunction with a remote code execution (RCE) vulnerability, on a compromised server to evade detection. ... In most cases, attackers have tools or plugins written to automate this process. They’re also more likely to use open-source LLMs, as these don’t have the same protection mechanisms in place to prevent this type of malicious behavior and are typically free to use. The result will be an explosion in the number of zero-day hacks and other dangerous exploits, similar to the MOVEit and Log4Shell vulnerabilities that enabled attackers to exfiltrate data from vulnerable organizations.

Product Thinking For Data

Using data products is not just a question of buying a new platform. It has big implications for your organisation’s culture, governance, value delivery, and team structure. The starting point for the culture change is for everyone to think of data in terms of products. This is a big step. A hundred years ago, no one thought of anything in terms of products. Neil H McElroy is credited with inventing the concept of product management at Procter & Gamble in 1931. Since then, the advantages of this way of thinking, as a better way of giving people the material goods that they need, have become clear. Now we are applying this concept to data, but people often don’t naturally think this way about something that is not material. Thinking of data as products encourages a wider perspective on the data asset throughout its full lifecycle, starting from the point of conception all the way towards retirement and decommissioning. It also unlocks access to an expansive repertoire of tools, methodologies and techniques that have been tested and proven to optimise value delivery.

What African CIO clubs do to foster digital talent

More initiatives are springing up to raise awareness of digital technology, which he believes is now part of daily life. The CIO clubs are certainly a way to help solve the problem. “It’s not uncommon to see these initiatives go even to remote areas in several African countries,” Simba says, adding that CESIA regularly organizes awareness-raising workshops. “The African cybersecurity barometer we publish every year enables us to take stock of the situation, but also to raise awareness across the continent on related issues and thus fight against this digital divide.” For Ebondzo, president of the Congolese CIO Club, this problem is real, but it is not unique to the African continent. “Many countries, including in Europe, are no exception, even if it must be acknowledged that the scale of the phenomenon is not the same everywhere,” she said, reporting that her club trains and supports young people in digital professions, with or without a diploma. “We act by participating as a player in government initiatives to reduce the digital divide such as the Project of Digital Transformation Acceleration Program (PATN), the Universal Electronic Communications Access and Service Fund (FASUCE) and private initiatives.”

The AI Problem We’re Not Taking Seriously Enough

Like a lot of people who have degrees in manpower management, I think unions only result when management loses the trust of their employees. I have belonged to and had to fight unions over the years, so I’m not a fan, but I recognize that when management misbehaves against employees, unions are one of the only powerful defenses that can work at scale. Using the actors and writers strikes as an example, the reason unions are a problem is that they create a second chain of command not aligned with the business and can drive initiatives that destroy the companies and industries they operate in, because their primary tool to elicit a favorable management response is to temporarily shut the business down. This is bad in a competitive environment because customers can’t do business with companies that cannot keep their doors open. Much of manufacturing’s move offshore was the direct result of union actions making labor too expensive domestically. The quickest way to get a union to form is to convince employees that they are being treated unfairly. Having them train AI tools to replace them would be perceived as incredibly unfair.

Quote for the day:

“None of us can afford to play small anymore. The time to step up and lead is now.” -- Claudio Toyama

Daily Tech Digest - August 24, 2023

3 data privacy principles to adopt now, even while governments still debate

Fairness is one of the most powerful guiding principles any brand can adopt for its use of data, but what does it mean in practice? On the one hand, it’s about considering how you’re using not just data but the tools and technologies that help you harness data in your marketing and decision-making. On the other hand, it’s important to remember we’re not just talking about one moment in time, like the moment when someone gives you their data, or the moment of an interaction between them and you, in a store or on your website. It’s about the potential implications that these moments can have down the line. Could it lead to an unfair, harmful, or discriminatory outcome for them? Could it keep them from getting credit? Or a job offer? Could it perpetuate a stereotype about a protected class of people? Building a foundation of fairness, for example, could mean implementing policies and procedures to regularly assess the data and tech you use to ensure they do not have a disparate impact on vulnerable consumers.

Cyber attackers using Gen AI more effectively than defenders

Both cyber attackers and defenders employ generative AI, but attackers use it more effectively. Adversaries capitalise on AI/ML, deepfakes, facial recognition, and augmented reality/virtual reality (AR/VR) to enhance hacking strategies against government agencies, businesses, and strategic targets, surpassing cyber defenders in technological adaptation. Facial recognition and AR/VR systems illustrate the extensive use of deepfake technology by cybercriminals. We predict that within two years, social engineering and phishing attacks will predominantly employ deepfakes, making defenders' tasks much harder. Malware capabilities have evolved significantly. Instead of creating static malware, hackers now build multi-behavioural malware that adapts in real-time. Upon reaching a target, this malware assesses the environment and generates tailored malicious code, targeting various systems like Windows, Linux, Outlook, and mobile devices. This is powered by AI/ML engines, resulting in multi-behavioural, metamorphic, and polymorphic malware that dynamically alters their code as they spread.

Cloud Robotics: A New Frontier for Internet Technology

Robots connected to the cloud are being used in warehouses and distribution centers for material handling, order fulfillment, and inventory management duties. These robots are capable of independent navigation, object recognition and picking, and teamwork with human personnel. The medical sector is likewise ripe for transformation thanks to cloud robotics: robots connected to the cloud can access patient information, medical records, and cutting-edge disease-diagnosis algorithms. In home automation, cloud robotics is changing how we interact with our domestic environments. In agriculture, cloud-enabled robots can automate harvesting, monitor crop health, and manage resource usage. These robots can use the cloud to evaluate massive volumes of field data, forecast agricultural yields, and make quick judgments. Cloud robotics has tremendous promise as we look to the future. Advanced artificial intelligence (AI) is being combined with cloud robotics as a new trend, allowing robots to act more intelligently and quickly adapt to their surroundings.

Organizing Around Business Capabilities

A Value Structure is an idealized teaming structure illustrating how the organization delivers benefits to its customers. The idealized structure includes teams and roles to not only operate a capability, but also to build it. We call this structure the value structure to differentiate it from two other structures within an organization: formal structure and learning structure. The formal structure represents the way an organization structures its activities into jobs and job families, manages compensation and other aspects of human resources. The learning structure represents the way an organization learns to improve its performance, including role-based learning, team-based learning, and establishing a culture of relentless improvement without guilt or blame. Establishment of a value structure independent from formal and learning structures enables an organization to begin to change how it delivers value to customers without the overhead of changing formal reporting or job titles. The value structure makes impediments to the flow of value clearly visible so we can either eliminate them or explicitly orchestrate them.

How to Build True Cyber Resilience

Cyber resilience cannot be achieved by implementing one initiative or investing in one new technology. “CISOs should focus on the question, ‘How ready are we?’” says Hopkins. Are organizations ready to detect threats, respond to them, recover, and adapt to an ever-changing threat landscape? “The first step to building cyber resilience involves understanding which cyberattacks are most relevant to an organization based on its industry, location, IT ecosystem, data type, users, etc.,” says Tony Velleca, CISO at digital technology and IT service company UST and CEO of CyberProof, a UST security services company. Once an organization understands its risks, the question becomes how to detect those threats, stop them, and contain them if and when they become cybersecurity incidents. The answer lies in a blend of technology and talent. Combining the power of cybersecurity tools, such as zero trust and managed detection and response, can help organizations achieve cyber resilience, but they need to ensure the strategies they deploy make measurable progress toward that goal.

AI and the evolution of surveillance systems

AI models are influenced by the datasets used to train them. It is imperative that AI vendors carefully tune and balance their datasets to prevent biases from occurring. Balancing datasets is a manual process that requires making sure that the humans visible in the datasets are a good representation of reality, and do not have biases towards certain human traits. In our case, we use diverse groups of actors, from all over the world, to play out violence for our training datasets to ensure they are balanced. Furthermore, testing regularly for such biases can go a long way. A carefully designed system can protect and help people without significantly impacting their privacy. This requires considering privacy from designing to implementing AI systems. I believe that the future of AI-powered surveillance will see reduced privacy infringement. Currently, large surveillance installations still require humans looking at camera streams all the time. In a trigger-based workflow, where humans take actions after an AI has alerted them, the amount of security camera footage seen by humans is much less, and thus the risk of privacy infringement decreases.

Controversial Cybercrime Law Passes in Jordan

A joint statement by Human Rights Watch, Access Now, Article 19, and 11 other organizations said the bill has several provisions threatening freedom of expression, the right to information, and the right to privacy, as well as tightening government control over the Internet. The groups also claimed the bill will introduce new controls over social media, weaken online anonymity, hamper free expression and access to information, and increase online censorship. Meanwhile, the European Union says it recognizes and supports Jordan's objective to create a strong legislative framework to deal with and counter cybercrime efficiently, but it contends that some of the provisions of the new cybercrime law depart from international human rights standards and could result in limiting freedom of expression online and offline. Liz Throssell, the United Nations' spokesperson for the UN High Commissioner for Human Rights, said countries indeed need to take steps to combat cybercrime, but protecting security online and ensuring online freedoms must be treated as complementary goals.

Evaluating Open Source: Green Flags to Look For

First and foremost, is the open-source community for the solution vibrant? Is it widely adopted, and does the community regularly contribute updates? A healthy, engaged community is a sign that the technology has legs and that companies are successful with it; it often indicates the extent to which companies are employing staff to contribute to the community. Closely related to this point, does the open source technology actually solve the problems you need solved? With the enormous popularity of open source comes enormous hype around novel technologies, but are those technologies actually something that helps solve your business problems in a sustainable way, such that you can be confident your investments will carry you several years? You should evaluate the suitability of open source technology in the same way you evaluate proprietary technology and not let the free or low-cost factors lead to hasty decisions. Finally, are vendors providing software, services, and support for the open source technology?

How Threat Research Can Inform Your Cloud Security Strategy

The most important thing to remember about cybersecurity is that it’s not an action you take, but a practice you follow. Implementing a strong cloud security posture requires regularly assessing and updating your cloud security policies, whether or not new threats have emerged. This means being proactive in your protection strategies and planning for the unexpected. Creating an incident response plan is a great place to start, and continuing employee education and training will help embed a security-focused mindset across the organization as a whole. There is no “one right way” to establish a cloud security strategy, but it’s a sure bet that being informed is a good move. Keeping up to date on the latest cybersecurity threats and vulnerabilities through sources like the National Vulnerability Database and Orca Research Pod is a good place to start. However, proactive measures like implementing best practices, organizational training, and even bug bounties and other security policies can go a long way toward creating a well-informed cloud security posture.

Regulatory uncertainty overshadows gen AI despite pace of adoption

In traditional application development, enterprises have to be careful that end users aren’t allowed access to data they don’t have permission to see. For example, in an HR application, an employee might be allowed to see their own salary information and benefits, but not that of other employees. If such a tool is augmented or replaced by an HR chatbot powered by gen AI, then it will need to have access to the employee database so it can answer user questions. But how can a company be sure the AI doesn’t tell everything it knows to anyone who asks? This is particularly important for customer-facing chatbots that might have to answer questions about customers’ financial transactions or medical records. Protecting access to sensitive data is just one part of the data governance picture. “You need to know where the data’s coming from, how it’s transformed, and what the outputs are,” says Nick Amabile, CEO at DAS42, a data consulting firm. “Companies in general are still having problems with data governance.”
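One common way to address this, sketched below under simplified assumptions (a toy access-control list and record shape, both hypothetical), is to filter records by the asking user's permissions in the retrieval layer, so the model is never handed data the user could not view directly:

```python
def visible_records(user: str, records: list[dict], acl: dict) -> list[dict]:
    """Filter HR records down to what `user` may see BEFORE any text
    reaches the language model; the chatbot then answers only from
    data the asking employee is entitled to view."""
    allowed = acl.get(user, set())
    return [r for r in records if r["employee_id"] in allowed]

records = [
    {"employee_id": "e1", "salary": 90_000},
    {"employee_id": "e2", "salary": 120_000},
]
# In this toy ACL, each employee may see only their own record.
acl = {"e1": {"e1"}, "e2": {"e2"}}

context = visible_records("e1", records, acl)
# Only now would `context` be serialized into the chatbot's prompt.
```

The design point is that authorization happens outside the model: prompt-level instructions like "don't reveal other salaries" are not an access control, because the model cannot leak data it was never given.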

Quote for the day:

"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer

Daily Tech Digest - August 23, 2023

“While saying ‘yes’ to a project can seem like the easiest way to spark innovation, the ability to say ‘no’ is vital to ensure companies focus on projects primed to deliver long-term value,” says Prasad Ramakrishnan, CIO at software company Freshworks. “Evaluating these decisions requires a deep understanding of company and stakeholder priorities.” ... IT leaders facing a surfeit of worthwhile technical projects, however, can find themselves in a difficult — and nerve-racking — position, says Barry Shurkey, CIO at NTT Data Services. What if the chosen initiatives don’t work out? What if you make the wrong choice and forgoing other options ends up having a negative or detrimental impact on the business? “Sometimes, this forces CIOs to delve in and quantify the potential success and the impact of failure for each initiative,” Shurkey says. “To enable us to be connected with the pulse of the business and to strike the right balance and prioritize the right projects, it’s also important for IT leaders to build strong relationships with their counterparts in the C-suite and with the next level of leaders in the business functions.”

Software Makers May Face Greater Liability in Wake of MOVEit Lawsuit

The cases come at a pivotal time, as the discussion and potential legislation around software vendor liability heats up and the Biden administration ponders its response. The National Cybersecurity Strategy, released by the Biden administration in March, acknowledged that under the currently recognized liability paradigm, software vendors are rarely held to account for exploited flaws in their solutions. "Whether under contract, product liability, or common-law negligence theories, software makers to date have been nearly universally successful in avoiding meaningful liability," notes Mark Millender, senior advisor, global executive engagement at Tanium, a provider of converged endpoint management. The National Cybersecurity Strategy proposes a joint effort between the administration, Congress, and the private sector to develop legislation to establish such liability, a process that will take time but is ultimately necessary, he says. "It is critical to address the lack of accountability to drive the market to produce safer products and services while preserving innovation," Millender says.

Creating a Successful Data Quality Strategy

One of the most powerful ways data quality management teams can build a unified systems mission with upper management is to present data as a product in operations – a thing that can be measured and measured again. “Things that you don’t continue to measure can easily spin out of control: like money, like weight,” Kapoor quipped. However, team members need a clear sense of where to target indicative measurement and locate problem areas in the chain of operations. The team needs to have a realistic vision as it makes timetables for improving data projections. In setting up ongoing data quality metrics that help reveal where data failures recur, Kapoor presented the innovative view that in the end, data is defined by a company’s consumers. “When the data is wrong,” she mused, “who bleeds – the producer or the consumer? The consumer! So they need to become part of the game.” Just as management needs to steer the ship in a way to implement evolving data needs, data quality teams must communicate with consumers in order to look for persistent ways that data fails them. 

How Organisations Can Manage Underperforming Employees

People are happiest in roles where they get ample opportunities to apply themselves and play to their strengths. Underperformance could therefore also be owing to a mismatch in role expectations and deliverables vis-a-vis the strengths of a person. An average performer in one role might do a stellar job in a different role. It is therefore worthwhile for business and HR leaders to look at the competencies and personality traits of the individual and determine whether the employee has been given the right professional opportunities. At times, a small tweak in the current job role or a completely new responsibility might be the right solution to bring about the change from average to good performance, as the employee is able to shift mind share from their development areas and focus on leveraging their strengths. ... If you still fail to see the desired results, create a personalised performance improvement plan and set clear goals for them to achieve in a designated time period. Make sure that the goals are specific enough and are relevant to the organisational objective.

The Physical Impact of Cyberattacks on Cities

Cities have a multitude of responsibilities, like keeping the lights on, keeping water flowing, keeping EMS staffed and operating, and these functions rely on technology and digital connection to keep themselves running. In essence, every department is its own tech company that is not only susceptible to cyberattacks but can be crippled by a well-executed attack. Government officials must always have these threats top of mind when planning for attacks, as one seemingly isolated cyber incident can have the power to physically shut down needed resources. Once an attack hits a city, it is difficult for officials to regain the trust of the public. This cannot be seen as simply a byproduct of an attack — reputational impact is often a central goal of bad actors. Ransomware attacks can look like targeted campaigns to discredit a city, which in turn impacts the city's ability to generate revenue with a potential loss in residents and tourists, which are all critical for sustaining a city's viability.

The CISO Role Transformation: The Shift from Security to Trust Assurance

There is a critical link between trust and revenue: companies that lead with trust and communicate it effectively go to market with an advantage. This new approach to cybersecurity allows companies to close deals faster, increase customer retention, and reduce the time to renewal. When cybersecurity is aligned with trust, it becomes an integral part of the revenue journey, contributing positively to customer acquisition costs, lifetime value, and overall business performance. ... The conversation shifted to the relationship between the SEC's final ruling on cybersecurity risk management and the concept of trust assurance. Marquez pointed out that while the ruling introduces regulatory requirements for companies to attest to their safety posture, it can be seen as a hammer approach rather than a carrot approach to trust assurance. He emphasized that businesses should proactively embrace trust practices to demonstrate value, rather than only reacting to regulatory pressure.

The IoT security enigma: Safely navigating an interconnected realm

The question of IoT security is a crisis waiting to happen. Inadequate passwords, obsolete software, and absence of proper encryption are an open invitation for hackers to breach sensitive information or seize control of these devices. The fallout can be severe, ranging from identity theft to financial damage and even physical harm. Data privacy is another significant concern. IoT devices amass and generate vast quantities of data, including potentially sensitive information such as location, health data, or financial transactions. Safeguarding this data is paramount to preserving individual privacy and security. Identity theft is another concern. By compromising IoT devices, hackers can gather personal information like login credentials or credit card details, causing chaos for victims. ... The convenience and benefits of the interconnected world are inseparable from cyber threats that call for immediate redress. The principal challenges surrounding IoT security range from a lack of inbuilt security measures to weakly encrypted communication protocols.

4 Popular Master Data Management Implementation Styles

The Registry approach is the dominant one among organizations that deal with many disparate data sources, particularly smaller and mid-sized ones. Rather than copying data into a central repository, it leaves the data in the source systems and maintains a registry of cross-references that links matching records across those sources. Matching algorithms are used to identify duplicates and resolve them in the unified view. An advantage of this approach is that the original data isn't altered: changes are made directly within source systems rather than in a separate MDM repository. Anyone verifying the truth of data, therefore, can use global identifiers to trace it back to the original, unaltered source. ... The Coexistence style of MDM implementation allows the MDM hub and the original data sources to coexist and stay synchronized in near real time. Because records propagate between systems with little delay, the golden record stays current, as do the related applications that feed the data, leading to efficiency, timeliness, and accuracy.
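The Registry style's key idea, a hub that stores only cross-references while data stays in the sources, can be sketched in a few lines. This is a toy under simplified assumptions: records sharing a normalized email are treated as the same entity, and all class, source, and field names are illustrative.

```python
import itertools

class RegistryHub:
    """Registry-style MDM hub: holds identifiers, never the data itself."""

    def __init__(self):
        self._by_match_key = {}   # normalized email -> global ID
        self._xrefs = {}          # global ID -> [(source, local_id), ...]
        self._ids = itertools.count(1)

    def register(self, source: str, local_id: str, email: str) -> int:
        key = email.strip().lower()        # simplistic matching algorithm
        gid = self._by_match_key.get(key)
        if gid is None:
            gid = next(self._ids)          # mint a new global identifier
            self._by_match_key[key] = gid
            self._xrefs[gid] = []
        self._xrefs[gid].append((source, local_id))
        return gid

    def trace(self, gid: int):
        """Track a global identifier back to the unaltered source records."""
        return self._xrefs[gid]

hub = RegistryHub()
g1 = hub.register("crm",     "C-17", "Ada@Example.com")
g2 = hub.register("billing", "B-09", "ada@example.com ")  # same entity
print(g1 == g2)        # → True: matched to one golden identifier
print(hub.trace(g1))   # → [('crm', 'C-17'), ('billing', 'B-09')]
```

Note that `register` never stores the email or any other attributes; consumers use `trace` to read the data from the systems of record, which is exactly why the originals stay unaltered.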

Balancing risk and compliance: implications of the SEC’s new cybersecurity regulations

Guaranteeing that sensitive information is protected while ensuring companies demonstrate compliance requires striking a delicate balance. Companies should consider how and when attorney-client privilege - both the privilege that attaches to corporate communications and the one that can be exclusive to the board - comes into play when conducting internal policy and reporting reviews, preparing draft reports that identify gaps and suggest how to close them, determining which external vendors to use and how to communicate with them, and handling related aspects of cyber readiness. ... The new SEC rules signal a shift in corporate cybersecurity management. These rules, although challenging, offer an opportunity for companies to exhibit their commitment to managing these risks. With the right tools, services, and advice, businesses can not only comply with these new rules but also bolster their overall cybersecurity posture, thereby protecting their operations, reputation, and bottom line.

How AI brings greater accuracy, speed, and scale to microsegmentation

Bringing greater accuracy, speed and scale to microsegmentation is an ideal use case for AI, ML and the evolving area of new generative AI apps based on private Large Language Models (LLMs). Microsegmentation is often scheduled in the latter stages of a zero-trust framework's roadmap because large-scale implementation can take longer than expected. AI and ML can help increase the odds of success earlier in a zero-trust initiative by automating the most manual aspects of implementation. Using ML algorithms to learn how an implementation can be optimized further strengthens results by enforcing least-privileged access for every resource and securing every identity. Forrester found that the majority of microsegmentation projects fail because on-premises private networks are among the most challenging domains to secure. Most organizations' private networks are also flat, defying policy definitions at the level of granularity that microsegmentation needs to secure their infrastructure fully.
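The automated, learning-driven step the excerpt describes can be sketched at its simplest: mine observed flows into explicit allow-rules and deny everything else (least privilege). This is a toy, not any vendor's algorithm; a real system would layer ML-based anomaly scoring on top, and the workload names and flow data below are illustrative.

```python
from collections import defaultdict

def learn_policy(observed_flows):
    """Map each workload to the exact (peer, port) pairs it was seen using."""
    policy = defaultdict(set)
    for src, dst, port in observed_flows:
        policy[src].add((dst, port))
    return policy

def is_allowed(policy, src, dst, port):
    # Anything not learned during observation is denied: least privilege.
    return (dst, port) in policy.get(src, set())

flows = [
    ("web-1", "app-1", 8080),
    ("app-1", "db-1", 5432),
    ("web-1", "app-1", 8080),   # repeated flows collapse into one rule
]
policy = learn_policy(flows)
print(is_allowed(policy, "web-1", "app-1", 8080))  # → True (observed)
print(is_allowed(policy, "web-1", "db-1", 5432))   # → False (never observed)
```

The hard part on a flat network is exactly what the Forrester finding implies: with no existing segment boundaries, the observation phase must see enough traffic to avoid learning either too little (breaking applications) or too much (over-broad rules).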

Quote for the day:

"Good leadership consists of showing average people how to do the work of superior people." -- John D. Rockefeller