Daily Tech Digest - August 28, 2023

3 keys to making data democratization a reality

The complexity of the modern data stack presents too many opportunities for data sets to fail users and compromise users’ trust in their data. Companies are using an ever-increasing number of disparate data tools, which in turn increases the number of transformations the data goes through. A user accessing data that’s been through multiple transformations needs assurance that the data is both accurate and faithful to what was originally captured in source systems. Clearly, this is an issue that must be addressed—especially when we consider that this lack of trust eats away at that 32% metric we saw earlier. In reality, that figure is even lower if business users don’t feel that they can trust the data available to them. Ensuring users can trust their data requires a multi-pronged approach: implementing automated data quality software, providing strong data lineage, and establishing data governance policies. As companies work toward data democratization, providing transparency, auditing abilities, and strong data governance can give users greater confidence in the data being analyzed and the insights derived from it—leading to more widespread data use.
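As a minimal sketch of what "strong data lineage" can look like in practice, the toy `Dataset` wrapper below (all names hypothetical) records each transformation alongside the value it produces, so a consumer can audit how a figure was derived back to its source system:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Dataset:
    """A value plus the lineage of transformations that produced it."""
    value: Any
    lineage: list = field(default_factory=list)

def transform(ds: Dataset, fn: Callable, description: str) -> Dataset:
    """Apply fn and append an auditable lineage entry."""
    return Dataset(value=fn(ds.value), lineage=ds.lineage + [description])

# A consumer can trace any derived figure back to its source system.
raw = Dataset(value=[120, 95, None, 310], lineage=["extracted from billing_db"])
cleaned = transform(raw, lambda rows: [r for r in rows if r is not None],
                    "dropped null rows")
total = transform(cleaned, sum, "summed revenue")
```

Production lineage tools capture this metadata automatically at each pipeline stage; the point here is only that every transformation leaves an inspectable trail.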


Cybersecurity insurance is missing the risk

The problem is the nature of the threat. Cyber attackers escalate and adapt quickly, which undermines the historically based models that insurance companies rely on. Attackers continually shift the maneuvers they use to identify victims, inflict growing losses, and rapidly move into new areas of impact. Denial-of-service attacks were once popular but were superseded by data breaches, which cause much more damage. Recently, attackers expanded their repertoire to include ransomware-style attacks that pushed insurable losses even higher. Predicting the cornerstone metrics for actuarial modelers – the Annual Loss Expectancy and the Annual Rate of Occurrence – with a high degree of accuracy is beyond the current capabilities of insurers. The industry currently conducts assessments for new clients to understand their cybersecurity posture, determine whether they are insurable, decide what should be included in or excluded from policies, and calculate premiums. The current process is to weigh controls against best practices or peers to estimate the security posture of a policyholder.
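The cornerstone metrics mentioned above have standard actuarial definitions: the single loss expectancy (SLE) is asset value times exposure factor, and the annual loss expectancy (ALE) is SLE times the annual rate of occurrence (ARO). A quick sketch, with figures invented for illustration:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE: expected loss from one incident (exposure_factor in [0, 1])."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO, where ARO is the expected number of incidents per year."""
    return sle * aro

# A $2M asset, 25% damaged per incident, one incident every two years:
sle = single_loss_expectancy(2_000_000, 0.25)  # 500,000
ale = annual_loss_expectancy(sle, 0.5)         # 250,000 per year
```

The formulas are trivial; the article's point is that the ARO input is what attackers' rapid shifts make nearly impossible to estimate from historical data.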


The Real Business Value of Platform Engineering

One of the biggest obstacles to managing cloud costs is understanding the business context behind resource consumption. Since the platform is the source of all deployments, it can provide end-to-end visibility into environments launched across all phases of the software development life cycle (SDLC). On its own, cloud billing data lacks transparency. Some platforms, however, can expose how cloud costs are incurred. Integrating application infrastructure into a platform can automate tagging as part of the deployment process. This ties usage back to the specific applications, pipelines, stages, and teams to which it pertains. Tracking real-time configurations with this kind of business context can help engineering and technology teams make informed decisions about cost optimization and resource consumption. For example, they may be able to pinpoint a person or team that often leaves environments running, and incurring costs, over weekends or holidays when they are not being used. These insights can inform the implementation of cost-management guardrails and consumption policies.
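A minimal sketch of the tagging idea (tag names and billing rows are invented for illustration): tags stamped automatically at deploy time let raw billing line items be rolled up by the team that owns them:

```python
def build_cost_tags(app: str, pipeline: str, stage: str, team: str) -> dict:
    """Tags applied automatically at deploy time so billing rows can be
    traced back to the owning application, pipeline, stage, and team."""
    return {"app": app, "pipeline": pipeline, "stage": stage, "team": team}

def attribute_costs(billing_rows: list[dict]) -> dict:
    """Roll up raw billing line items by the team tag."""
    totals: dict[str, float] = {}
    for row in billing_rows:
        team = row["tags"].get("team", "untagged")
        totals[team] = totals.get(team, 0.0) + row["cost"]
    return totals

rows = [
    {"cost": 40.0, "tags": build_cost_tags("checkout", "ci", "staging", "payments")},
    {"cost": 10.0, "tags": build_cost_tags("search", "ci", "dev", "discovery")},
    {"cost": 25.0, "tags": build_cost_tags("checkout", "ci", "prod", "payments")},
]
by_team = attribute_costs(rows)
```

Because the platform applies the tags during deployment, an "untagged" bucket should stay empty; anything landing there signals a deployment path that bypassed the platform.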


Beyond talent war: Transform employer-employee relations with tech and innovation

Based on Microsoft's Work Trend Index Annual Report, 51% of Gen-Z employees show a greater inclination towards prioritising health and well-being over work. Their top three priorities are a positive workplace culture, mental health and well-being benefits, and a sense of purpose or meaning. Despite these preferences, many employers have yet to prioritise wellness and purpose effectively. This situation prompts the question: How can HR professionals take action to engage Gen-Z employees who perceive less support and encouragement in their growth, given that this lack of support is causing this demographic to reassess the role they envision work playing in their lives? According to the Head of HR at Cummins India, one must actively address the challenge of Gen-Z feeling less supported. After all, the ultimate outcome we aim for is building a strong sense of connectedness with our employees. However, it's essential to emphasise that connectedness isn't determined by whether interactions are virtual or physical. This challenge stems from the fact that connectedness is highly personal.


SmokeLoader Trojan Deploys Location-Tracking Malware

The malware scans for Wi-Fi access points every 60 seconds and captures geolocation data that could allow threat actors to track the compromised system, according to a report by researchers at cybersecurity firm Secureworks, who uncovered the novel malware on Aug. 8. "It is unclear how the threat actors use this data. Demonstrating access to geolocation information could be used to intimidate victims or pressure them to comply with demands," the researchers said. Google's geolocation API is a service that accepts an HTTPS request listing the cell towers and Wi-Fi access points a mobile client can detect and returns latitude-longitude coordinates. The malware checks for the WLANSVC service on the compromised system, which indicates the presence of wireless capability on a Windows system. "The malware only checks for the service name and does not confirm the service is operational. If the service name does not exist, then the scanner exits." Whiffy Recon persists on the system by creating the wlan.lnk shortcut in the user's Startup folder.
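The request shape the article describes can be sketched as below. The function only builds the JSON body the geolocation API expects (the BSSIDs here are fabricated); a real client would POST it over HTTPS to the geolocate endpoint with an API key:

```python
import json

def build_geolocate_request(access_points: list[dict]) -> str:
    """JSON body for a Wi-Fi-based geolocation lookup: the observed
    access points are submitted, latitude/longitude comes back."""
    body = {
        "considerIp": False,  # locate by the access points only
        "wifiAccessPoints": [
            {"macAddress": ap["bssid"], "signalStrength": ap["rssi"]}
            for ap in access_points
        ],
    }
    return json.dumps(body)

# Two access points seen in a scan (fabricated BSSIDs for illustration):
scan = [
    {"bssid": "00:11:22:33:44:55", "rssi": -48},
    {"bssid": "66:77:88:99:aa:bb", "rssi": -71},
]
payload = build_geolocate_request(scan)
```

With several access points and their signal strengths, a lookup like this can often place a device within tens of meters, which is why repeating the scan every 60 seconds amounts to tracking.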


Business Impact: The Power of Data Experiences

Creating a great data experience means having the ability to access pertinent data at any time and from any location. This entails having an ample amount of data to provide meaningful insights, while also ensuring that data access is restricted to what is necessary. These experiences have the potential to greatly reduce manual labour and the amount of additional work required. The relevance of data varies for each individual within an organisation. As an example, field offices dedicate several hours each week to compiling data, which is then sent to headquarters, where additional time is spent on overall data compilation. Automating data processes liberates numerous hours across the entire organisation. Most importantly, there is greater real-time visibility into the operational aspects of the business. Similarly, the speed and method of accessing data will differ among employees. For example, a hybrid worker or frequent traveler may prefer accessing relevant data on a mobile device, while an office-based employee might opt for a laptop.


Why generative AI is a double-edged sword for the cybersecurity sector

With this technology, bad actors will generate unique payloads or attacks designed to evade security defenses built around known attack signatures. One way attackers are already doing this is by using AI to develop webshell variants, malicious code used to maintain persistence on compromised servers. Attackers can input an existing webshell into a generative AI tool and ask it to create iterations of the malicious code. These variants can then be used, often in conjunction with a remote code execution (RCE) vulnerability, on a compromised server to evade detection. ... In most cases, attackers have tools or plugins written to automate this process. They’re also more likely to use open-source LLMs, as these don’t have the same protection mechanisms in place to prevent this type of malicious behavior and are typically free to use. The result will be an explosion in the number of zero-day hacks and other dangerous exploits, similar to the MOVEit and Log4Shell vulnerabilities that enabled attackers to exfiltrate data from vulnerable organizations.


Product Thinking For Data

Using data products is not just a question of buying a new platform. It has big implications for your organisation’s culture, governance, value delivery, and team structure. The starting point for the culture change is for everyone to think of data in terms of products. This is a big step. A hundred years ago, no one thought of anything in terms of products. Neil H. McElroy is credited with inventing the concept of product management at Procter & Gamble in 1931. Since then, the advantages of this way of thinking, as a better way of giving people the material goods that they need, have become clear. Now we are applying this concept to data, but people often don’t naturally think this way about something that is not material. Thinking of data as products encourages a wider perspective on the data asset throughout its full lifecycle, from conception through to retirement and decommissioning. It also unlocks access to an expansive repertoire of tools, methodologies, and techniques that have been tested and proven to optimise value delivery.


What African CIO clubs do to foster digital talent

More initiatives are springing up to raise awareness of digital technology, which he believes is now part of daily life. The CIO clubs are certainly a way to help solve the problem. “It’s not uncommon to see these initiatives go even to remote areas in several African countries,” Simba says, adding that CESIA regularly organizes awareness-raising workshops. “The African cybersecurity barometer we publish every year enables us to take stock of the situation, but also to raise awareness across the continent on related issues and thus fight against this digital divide.” For Ebondzo, president of the Congolese CIO Club, this problem is real, but it is not unique to the African continent. “Many countries, including in Europe, are no exception, even if it must be acknowledged that the scale of the phenomenon is not the same everywhere,” she said, reporting that her club trains and supports young people in digital professions, with or without a diploma. “We act by participating as a player in government initiatives to reduce the digital divide, such as the Digital Transformation Acceleration Project (PATN), the Universal Electronic Communications Access and Service Fund (FASUCE), and private initiatives.”


The AI Problem We’re Not Taking Seriously Enough

Like a lot of people who have degrees in manpower management, I think unions only form when management loses the trust of its employees. I have belonged to and had to fight unions over the years, so I’m not a fan, but I recognize that when management misbehaves against employees, unions are one of the only powerful defenses that can work at scale. Using the actors’ and writers’ strikes as an example, the reason unions are a problem is that they create a second chain of command not aligned with the business and can drive initiatives that destroy the companies and industries they operate in, because their primary tool to elicit a favorable management response is to temporarily shut the business down. This is bad in a competitive environment because customers can’t do business with companies that cannot keep their doors open. Much of manufacturing’s move offshore was the direct result of union actions making labor too expensive domestically. The quickest way to get a union to form is to convince employees that they are being treated unfairly. Having them train AI tools to replace them would be perceived as incredibly unfair.



Quote for the day:

“None of us can afford to play small anymore. The time to step up and lead is now.” -- Claudio Toyama

Daily Tech Digest - August 24, 2023

3 data privacy principles to adopt now, even while governments still debate

Fairness is one of the most powerful guiding principles any brand can adopt for its use of data, but what does it mean in practice? On the one hand, it’s about considering how you’re using not just data but the tools and technologies that help you harness data in your marketing and decision-making. On the other hand, it’s important to remember we’re not just talking about one moment in time, like the moment when someone gives you their data, or the moment of an interaction between them and you, in a store or on your website. It’s about the potential implications that these moments can have down the line. Could it lead to an unfair, harmful, or discriminatory outcome for them? Could it keep them from getting credit? Or a job offer? Could it perpetuate a stereotype about a protected class of people? Building a foundation of fairness, for example, could mean implementing policies and procedures to regularly assess the data and tech you use to ensure they do not have a disparate impact on vulnerable consumers.
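One concrete check for the "disparate impact" the paragraph warns about is the four-fifths rule used in employment-selection analysis: if a protected group's selection rate falls below 80% of the reference group's, the outcome warrants review. A sketch with invented numbers:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who received a favorable outcome."""
    return selected / applicants

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """A ratio below 0.8 (the 'four-fifths rule') flags potential
    disparate impact worth investigating."""
    return rate_protected / rate_reference

rate_a = selection_rate(30, 100)  # reference group: 30% approved
rate_b = selection_rate(18, 100)  # protected group: 18% approved
ratio = disparate_impact_ratio(rate_b, rate_a)
flagged = ratio < 0.8  # 0.6 here, so this outcome should be reviewed
```

The ratio is a screening heuristic, not a legal determination; the article's broader point is that such assessments should run regularly against both the data and the tools that consume it.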


Cyber attackers using Gen AI more effectively than defenders

Both cyber attackers and defenders employ generative AI, but attackers use it more effectively. Adversaries capitalise on AI/ML, deepfakes, facial recognition, and augmented/virtual reality (AR/VR) to enhance hacking strategies against government agencies, businesses, and strategic targets, surpassing cyber defenders in technological adaptation. Facial recognition and AR/VR systems illustrate the extensive use of deepfake technology by cybercriminals. We predict that within two years, social engineering and phishing attacks will predominantly employ deepfakes, making defenders' tasks much harder. Malware capabilities have evolved significantly. Instead of creating static malware, hackers now build multi-behavioural malware that adapts in real time. Upon reaching a target, this malware assesses the environment and generates tailored malicious code targeting various systems such as Windows, Linux, Outlook, and mobile devices. This is powered by AI/ML engines, resulting in multi-behavioural, metamorphic, and polymorphic malware that dynamically alters its code as it spreads.


Cloud Robotics: A New Frontier for Internet Technology

Robots connected to the cloud are being used in warehouses and distribution centers for material handling, order fulfillment, and inventory management duties. These robots are capable of independent navigation, object recognition and picking, and teamwork with human personnel. The medical sector is likewise ripe for transformation thanks to cloud robotics. Robots connected to the cloud can access patient information, medical records, and cutting-edge disease-diagnosis algorithms. In home automation, cloud robotics alters how we interact with our domestic environment. In agriculture, robots with cloud capabilities can automate harvesting, monitor crop health, and manage resource usage. These robots can use the cloud to evaluate massive volumes of field data, forecast agricultural yields, and make quick judgments. Cloud robotics holds tremendous promise as we look to the future. Advanced artificial intelligence (AI) and cloud robotics are being combined as a new trend, allowing robots to act more intelligently and quickly adapt to their surroundings.


Organizing Around Business Capabilities

A Value Structure is an idealized teaming structure illustrating how the organization delivers benefits to its customers. The idealized structure includes teams and roles to not only operate a capability, but also to build it. We call this structure the value structure to differentiate it from two other structures within an organization: formal structure and learning structure. The formal structure represents the way an organization structures its activities into jobs and job families, manages compensation and other aspects of human resources. The learning structure represents the way an organization learns to improve its performance, including role-based learning, team-based learning, and establishing a culture of relentless improvement without guilt or blame. Establishment of a value structure independent from formal and learning structures enables an organization to begin to change how it delivers value to customers without the overhead of changing formal reporting or job titles. The value structure makes impediments to the flow of value clearly visible so we can either eliminate them or explicitly orchestrate them.


How to Build True Cyber Resilience

Cyber resilience cannot be achieved by implementing one initiative or investing in one new technology. “CISOs should focus on the question, ‘How ready are we?’" says Hopkins. Are organizations ready to detect threats, respond to them, recover, and adapt to an ever-changing threat landscape? “The first step to building cyber resilience involves understanding which cyberattacks are most relevant to an organization based on its industry, location, IT ecosystem, data type, users, etc.,” says Tony Velleca, CISO at digital technology and IT service company UST and CEO of CyberProof, a UST security services company. Once an organization understands its risks, the question becomes how to detect those threats, stop them, and contain them if and when they become cybersecurity incidents. The answer lies in a blend of technology and talent. Combining the power of cybersecurity tools, such as zero trust and managed detection and response, can help organizations achieve cyber resilience, but they need to ensure the strategies they deploy make measurable progress toward that goal.


AI and the evolution of surveillance systems

AI models are influenced by the datasets used to train them. It is imperative that AI vendors carefully tune and balance their datasets to prevent biases from occurring. Balancing datasets is a manual process that requires making sure that the humans visible in the datasets are a good representation of reality, and do not have biases towards certain human traits. In our case, we use diverse groups of actors, from all over the world, to play out violence for our training datasets to ensure they are balanced. Furthermore, testing regularly for such biases can go a long way. A carefully designed system can protect and help people without significantly impacting their privacy. This requires considering privacy from designing to implementing AI systems. I believe that the future of AI-powered surveillance will see reduced privacy infringement. Currently, large surveillance installations still require humans looking at camera streams all the time. In a trigger-based workflow, where humans take actions after an AI has alerted them, the amount of security camera footage seen by humans is much less, and thus the risk of privacy infringement decreases.
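A hedged sketch of the kind of balance check described here: count each group's share of the training data and flag groups that fall below a chosen threshold. The labels and the 15% threshold are illustrative, not any vendor's actual method:

```python
from collections import Counter

def representation_gaps(labels: list[str], min_share: float = 0.15) -> list[str]:
    """Flag groups whose share of the training data falls below
    min_share -- a simple signal of an unbalanced dataset."""
    counts = Counter(labels)
    total = len(labels)
    return sorted(g for g, n in counts.items() if n / total < min_share)

# Region tags for actors in a (toy) training set of 100 clips:
sample = ["europe"] * 50 + ["asia"] * 30 + ["africa"] * 12 + ["americas"] * 8
under = representation_gaps(sample)  # groups under the 15% threshold
```

Counting labels only catches gross imbalance; the manual review the article describes is still needed to judge whether the represented traits reflect reality.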


Controversial Cybercrime Law Passes in Jordan

A joint statement by Human Rights Watch, Access Now, Article 19, and 11 other organizations said the bill has several provisions threatening freedom of expression, the right to information, and the right to privacy, as well as tightening government control over the Internet. The groups also claimed the bill will introduce new controls over social media, weaken online anonymity, hamper free expression and access to information, and increase online censorship. Meanwhile, the European Union says it recognizes and supports Jordan's objective to create a strong legislative framework to deal with and counter cybercrime efficiently, but it contends that some of the provisions of the new cybercrime law depart from international human rights standards and could result in limiting freedom of expression online and offline. Liz Throssell, spokesperson for the UN High Commissioner for Human Rights, said countries indeed need to take steps to combat cybercrime, but protecting security online and ensuring online freedoms must be treated as complementary goals.


Evaluating Open Source: Green Flags to Look For

First and foremost, is the open-source community for the solution vibrant; is it widely adopted and does the community regularly contribute updates? A healthily engaged community is a sign that the technology has legs and that companies are successful with it; it often indicates the extent to which companies are employing staff to contribute to the community. Closely related to this point, does the open source technology actually solve the problems you need solved? With the enormous popularity of open source comes enormous hype around novel technologies, but do those technologies actually help solve your business problems in a sustainable way, such that you can be confident your investments will carry you for several years? You should evaluate the suitability of open source technology the same way you evaluate proprietary technology, and not let the free or low-cost factors lead to hasty decisions. Finally, are vendors providing software, services, and support for the open source technology?


How Threat Research Can Inform Your Cloud Security Strategy

The most important thing to remember about cybersecurity is that it’s not an action you take, but a practice you follow. Implementing a strong cloud security posture requires regularly assessing and updating your cloud security policies in light of new threats. This means being proactive in your protection strategies and planning for the unexpected. Creating an incident response plan is a great place to start, and continuing employee education and training will help embed a security-focused mindset across the organization as a whole. There is no “one right way” to establish a cloud security strategy, but it’s a sure bet that being informed is a good move. Keeping up to date on the latest cybersecurity threats and vulnerabilities through sources like the National Vulnerability Database and Orca Research Pod is a good place to start. However, proactive measures like implementing best practices, organizational training, and even bug bounties and other security policies can go a long way toward creating a well-informed cloud security posture.


Regulatory uncertainty overshadows gen AI despite pace of adoption

In traditional application development, enterprises have to be careful that end users aren’t allowed access to data they don’t have permission to see. For example, in an HR application, an employee might be allowed to see their own salary information and benefits, but not that of other employees. If such a tool is augmented or replaced by an HR chatbot powered by gen AI, then it will need to have access to the employee database so it can answer user questions. But how can a company be sure the AI doesn’t tell everything it knows to anyone who asks? This is particularly important for customer-facing chatbots that might have to answer questions about customers’ financial transactions or medical records. Protecting access to sensitive data is just one part of the data governance picture. “You need to know where the data’s coming from, how it’s transformed, and what the outputs are,” says Nick Amabile, CEO at DAS42, a data consulting firm. “Companies in general are still having problems with data governance.”
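One common pattern for this problem (a sketch, not any specific product's implementation) is to filter retrieved records against the requesting user's entitlements before they ever reach the model's prompt, so the model never holds data the user could not access directly:

```python
def authorized_context(user_id: str, query_results: list[dict],
                       acl: dict[str, set]) -> list[dict]:
    """Drop any retrieved record the requesting user is not entitled
    to see BEFORE it is placed in the model's prompt."""
    allowed = acl.get(user_id, set())
    return [doc for doc in query_results if doc["owner"] in allowed]

# Hypothetical HR scenario: each employee may only read their own record.
acl = {"emp42": {"emp42"}}
retrieved = [
    {"owner": "emp42", "text": "salary: ..."},
    {"owner": "emp7",  "text": "salary: ..."},
]
context = authorized_context("emp42", retrieved, acl)
# Only emp42's own record survives to be included in the prompt.
```

Enforcing permissions at retrieval time, rather than asking the model to withhold answers, keeps the access decision in ordinary, auditable code.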



Quote for the day:

"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer

Daily Tech Digest - August 23, 2023

“While saying ‘yes’ to a project can seem like the easiest way to spark innovation, the ability to say ‘no’ is vital to ensure companies focus on projects primed to deliver long-term value,” says Prasad Ramakrishnan, CIO at software company Freshworks. “Evaluating these decisions requires a deep understanding of company and stakeholder priorities.” ... IT leaders facing a surfeit of worthwhile technical projects, however, can find themselves in a difficult — and nerve-racking — position, says Barry Shurkey, CIO at NTT Data Services. What if the chosen initiatives don’t work out? What if you make the wrong choice and foregoing other options ends up having a negative or detrimental impact on the business? “Sometimes, this forces CIOs to delve in and quantify the potential success and the impact of failure for each initiative,” Shurkey says. “To enable us to be connected with the pulse of the business and to strike the right balance and prioritize the right projects, it’s also important for IT leaders to build strong relationships with their counterparts in the C-suite and with the next level of leaders in the business functions.”


Software Makers May Face Greater Liability in Wake of MOVEit Lawsuit

The cases come at a pivotal time, as the discussion and potential legislation around software vendor liability heats up and the Biden administration ponders its response. The National Cybersecurity Strategy, released by the Biden administration in March, acknowledged that under the currently recognized liability paradigm, software vendors are rarely held to account for exploited flaws in their solutions. "Whether under contract, product liability, or common-law negligence theories, software makers to date have been nearly universally successful in avoiding meaningful liability," notes Mark Millender, senior advisor, global executive engagement at Tanium, a provider of converged endpoint management. The National Cybersecurity Strategy proposes a joint effort between the administration, Congress, and the private sector to develop legislation to establish such liability, a process that will take time but is ultimately necessary, he says. "It is critical to address the lack of accountability to drive the market to produce safer products and services while preserving innovation," Millender says.


Creating a Successful Data Quality Strategy

One of the most powerful ways data quality management teams can build a unified systems mission with upper management is to present data as a product in operations – a thing that can be measured and measured again. “Things that you don’t continue to measure can easily spin out of control: like money, like weight,” Kapoor quipped. However, team members need a clear sense of where to target indicative measurement and locate problem areas in the chain of operations. The team needs to have a realistic vision as it makes timetables for improving data projections. In setting up ongoing data quality metrics that help reveal where data failures recur, Kapoor presented the innovative view that in the end, data is defined by a company’s consumers. “When the data is wrong,” she mused, “who bleeds – the producer or the consumer? The consumer! So they need to become part of the game.” Just as management needs to steer the ship in a way to implement evolving data needs, data quality teams must communicate with consumers in order to look for persistent ways that data fails them. 
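Two of the simplest recurring data quality metrics, completeness and duplicate counts, can be sketched as plain functions (the column and key names below are hypothetical, and real DQ platforms track many more dimensions):

```python
def completeness(rows: list[dict], column: str) -> float:
    """Share of rows where the column is actually populated."""
    filled = sum(1 for r in rows if r.get(column) not in (None, ""))
    return filled / len(rows)

def duplicate_count(rows: list[dict], key: str) -> int:
    """Number of rows sharing a key value with an earlier row."""
    seen, dupes = set(), 0
    for r in rows:
        if r[key] in seen:
            dupes += 1
        seen.add(r[key])
    return dupes

rows = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": ""},
    {"id": 2, "email": "b@x.com"},  # repeated id
    {"id": 3, "email": "c@x.com"},
]
email_completeness = completeness(rows, "email")  # 3 of 4 rows populated
id_dupes = duplicate_count(rows, "id")
```

Measured on a schedule and trended over time, even metrics this simple reveal where failures recur, which is exactly the feedback loop consumers need to "become part of the game."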


How Organisations Can Manage Underperforming Employees

People are happiest in roles where they get ample opportunities to apply themselves and play to their strengths. Underperformance could therefore also be owing to a mismatch between role expectations and deliverables and the strengths of a person. An average performer in one role might do a stellar job in a different role. It is therefore worthwhile for business and HR leaders to look at the competencies and personality traits of the individual and figure out whether the employee has been given the right professional opportunities. At times, a small tweak in the current job role or a completely new responsibility might be the right solution to bring about the change from average to good performance, as the employee is able to shift mind share away from their development areas and focus on leveraging their strengths. ... If you still fail to see the desired results, create a personalised performance improvement plan and set clear goals for them to achieve in a designated time period. Make sure that the goals are specific enough and are relevant to the organisational objective.


The Physical Impact of Cyberattacks on Cities

Cities have a multitude of responsibilities, like keeping the lights on, keeping water flowing, and keeping EMS staffed and operating, and these functions rely on technology and digital connection to keep themselves running. In essence, every department is its own tech company that is not only susceptible to cyberattacks but can be crippled if an attack is executed effectively. Government officials must always have these threats top of mind when planning for attacks, as one seemingly isolated cyber incident can have the power to physically shut down needed resources. Once an attack hits a city, it is difficult for officials to regain the trust of the public. This cannot be seen as simply a byproduct of an attack — reputational impact is often a central goal of bad actors. Ransomware attacks can look like targeted campaigns to discredit a city, which in turn impacts the city's ability to generate revenue through a potential loss of residents and tourists, both critical for sustaining a city's viability.


The CISO Role Transformation: The Shift from Security to Trust Assurance

There is a critical link between trust and revenue: companies that lead with trust and communicate it effectively go to market with an advantage. This new approach to cybersecurity allows companies to close deals faster, increase customer retention, and reduce the time to renewal. When cybersecurity is aligned with trust, it becomes an integral part of the revenue journey, contributing positively to customer acquisition costs, lifetime value, and overall business performance. ... The conversation shifted to the relationship between the SEC's final ruling on cybersecurity risk management and the concept of trust assurance. Marquez pointed out that while the ruling introduces regulatory requirements for companies to attest to their safety posture, it can be seen as a hammer approach rather than a carrot approach to trust assurance. He emphasized that businesses should proactively embrace trust practices to demonstrate value, rather than only reacting to regulatory pressure.


The IoT security enigma: Safely navigating an interconnected realm

The question of IoT security is a crisis waiting to happen. Inadequate passwords, obsolete software, and absence of proper encryption are an open invitation for hackers to breach sensitive information or seize control of these devices. The fallout can be severe, ranging from identity theft to financial damage and even physical harm. Data privacy is another significant concern. IoT devices amass and generate vast quantities of data, including potentially sensitive information such as location, health data, or financial transactions. Safeguarding this data is paramount to preserving individual privacy and security. Identity theft is another concern. By compromising IoT devices, hackers can gather personal information like login credentials or credit card details, causing chaos for victims. ... The convenience and benefits of the interconnected world are inseparable from cyber threats that call for immediate redress. The principal challenges surrounding IoT security range from a lack of inbuilt security measures to weakly encrypted communication protocols.


4 Popular Master Data Management Implementation Styles

The Registry approach is the dominant one among organizations that deal with many disparate data sources, particularly smaller and mid-sized ones. It works by linking records from all of those sources through one central registry, where identifying data can be cleaned, consolidated, and aligned. Matching algorithms are used to identify and remove duplicates. An advantage of this approach is that the original data isn’t altered—changes are made directly within source systems as opposed to a separate MDM repository. Anyone verifying the truth of data, therefore, can use global identifiers to track it back to the original unaltered source. ... The Coexistence style of MDM implementation enables the MDM hub and the original data sources to coexist fully in real time. Because there is no delay in updating records from one system to another, the golden record remains accurate at all times—as do the related applications that feed the data—leading to efficiency, timeliness, and complete accuracy.
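A toy sketch of the Registry style: source records stay where they are, and the registry only maps a global key to (source, local id) pointers, so duplicates across systems resolve to one logical entity. Matching here is naive name normalization, far simpler than production matching algorithms:

```python
def normalize(name: str) -> str:
    """Naive match key: lowercase, collapse whitespace."""
    return " ".join(name.lower().split())

def build_registry(sources: dict[str, list[dict]]) -> dict[str, list[tuple]]:
    """Map each global key to pointers into the untouched source
    systems; the registry itself stores no business data."""
    registry: dict[str, list[tuple]] = {}
    for source, records in sources.items():
        for rec in records:
            key = normalize(rec["name"])
            registry.setdefault(key, []).append((source, rec["id"]))
    return registry

sources = {
    "crm":     [{"id": "c1", "name": "Acme  Corp"}],
    "billing": [{"id": "b9", "name": "acme corp"}],
}
registry = build_registry(sources)
# Both records resolve to the single entity keyed "acme corp",
# and each pointer leads back to the unaltered source record.
```

This illustrates the verification property the paragraph highlights: a global identifier resolves to pointers, not copies, so the original source record remains the system of truth.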


Balancing risk and compliance: implications of the SEC’s new cybersecurity regulations

Guaranteeing that sensitive information is protected while ensuring companies demonstrate compliance requires striking a delicate balance. Companies must consider how and when attorney-client privilege - both the privilege that attaches to corporate communications and the one that can be exclusive to the board - comes into play when conducting internal policy and reporting reviews, preparing draft reports that identify gaps and suggestions for closing them, deciding which external vendors to use and communicating with them, and handling related aspects of cyber readiness. ... The new SEC rules signal a shift in corporate cybersecurity management. These rules, although challenging, offer an opportunity for companies to exhibit their commitment to managing these risks. With the right tools, services, and advice, businesses can not only comply with the new rules but also bolster their overall cybersecurity posture, thereby protecting their operations, reputation, and bottom line.


How AI brings greater accuracy, speed, and scale to microsegmentation

Bringing greater accuracy, speed, and scale to microsegmentation is an ideal use case for AI, ML, and the evolving area of generative AI apps based on private Large Language Models (LLMs). Microsegmentation is often scheduled in the latter stages of a zero-trust framework's roadmap because large-scale implementation can take longer than expected. AI and ML can improve the odds of success earlier in a zero-trust initiative by automating the most manual aspects of implementation. Using ML algorithms to learn how an implementation can be optimized further strengthens results by enforcing least-privileged access for every resource and securing every identity. Forrester found that the majority of microsegmentation projects fail because on-premises private networks are among the most challenging domains to secure. Most organizations' private networks are also flat and defy policy definitions at the level of granularity that microsegmentation needs to secure their infrastructure fully.
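A heavily simplified sketch of the "learn, then enforce least privilege" idea: baseline the workload-to-workload flows actually observed on the network and allow only those. The flow tuples, counts, and threshold below are invented for illustration; production systems use far richer ML models and telemetry:

```python
from collections import defaultdict

# observed workload-to-workload flows: (source, destination, port); invented data
observed = [
    ("web", "api", 443), ("web", "api", 443),
    ("api", "db", 5432), ("api", "db", 5432),
    ("web", "db", 5432),  # seen once: a stray path that should not be baselined
]

def learn_policy(flows, min_count=2):
    """Allow only flows observed at least min_count times during the learning window."""
    counts = defaultdict(int)
    for flow in flows:
        counts[flow] += 1
    return {flow for flow, n in counts.items() if n >= min_count}

policy = learn_policy(observed)
print(("web", "api", 443) in policy)  # True: part of normal traffic
print(("web", "db", 5432) in policy)  # False: never baselined, so it stays segmented off
```

The value of automating this step is exactly what the excerpt describes: on a flat network, enumerating these allow-lists by hand is the part of microsegmentation that stalls projects.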



Quote for the day:

"Good leadership consists of showing average people how to do the work of superior people." -- John D. Rockefeller

Daily Tech Digest - August 20, 2023

Central Bank Digital Currency (CBDC) and blockchain enable the future of payments

CBDC has the potential to transform the future of payments. It can be used to create programmable money that can be spent only on specific things. For example, a government could issue a stimulus package that can only be spent on certain goods and services. This would ensure that the money is spent in the intended manner and would reduce the risk of fraud. Also, CBDC can improve financial inclusion. According to the World Bank, around 1.7 billion people do not have access to basic financial services. CBDC can solve this problem by providing a digital currency that anyone with a smartphone can use, without the need for a bank account. When a CBDC holder uses their phone as a medium for transactions, it becomes crucial to establish a strong link between their digital identity and the device they are using. This link is essential to ensure that the right party is involved in the transaction, mitigating the risk of fraud and promoting trust in the digital financial ecosystem. In this way, CBDC and digital identity can work together to improve financial inclusion.


A statistical examination of utilization trends in decentralized applications

Decentralized applications (dApps) have proliferated in recent years, but their long-term viability is a topic of debate. For dApps to be sustainable, and suitable for integration into larger service networks, they need to attract users and promise reliable availability. Therefore, assessing their longevity is crucial. Analyzing the utilization trajectory of a service is, however, challenging due to several factors, such as demand spikes, noise, autocorrelation, and non-stationarity. In this study, we employ robust statistical techniques to identify trends in currently popular dApps. Our findings demonstrate that a significant proportion of dApps, across a range of categories, exhibit statistically significant positive overall trends, indicating that success in decentralized computing can be sustainable and transcends specific fields. However, a substantial number of dApps show negative trends, with a disproportionately high number coming from the decentralized finance (DeFi) category.
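The excerpt does not name the specific techniques used, but one widely used robust, non-parametric option for detecting monotonic trends in noisy usage series is the Mann-Kendall test. A minimal sketch on invented daily-user counts:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: S statistic plus a normal-approximation z-score."""
    n = len(series)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18  # variance under H0, ignoring ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# invented daily active users for one dApp: a clear upward drift with noise
daily_users = [100, 120, 115, 130, 150, 145, 160, 170, 180, 175]
s, z = mann_kendall(daily_users)
print(s, round(z, 2))  # |z| > 1.96 indicates a significant trend at the 5% level
```

Because the test only compares the ordering of pairs of observations, it tolerates spikes and non-normal noise better than fitting a straight line would, which is why it suits the messy series the abstract describes.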


How SaaS Companies Can Monetize Generative AI

Rather than building these models from scratch, many companies elect to leverage OpenAI’s APIs to call GPT-4 (or other models), and serve the response back to customers. To obtain complete visibility into usage costs and margins, each API call to and from OpenAI tech should be metered to understand the size of the input and the corresponding backend costs, as well as the output, processing time and other relevant performance metrics. By metering both the customer-facing output and the corresponding backend actions, companies can create a real-time view into business KPIs like margin and costs, as well as technical KPIs like service performance and overall traffic. After creating the meters, deploy them to the solution or application where events are originating to begin tracking real-time usage. Once the metering infrastructure is deployed, begin visualizing usage and costs in real time as usage occurs and customers leverage the generative services. Identify power users and lagging accounts and empower customer-facing teams with contextual data to provide value at every touchpoint.
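As a sketch of the metering idea described above, the toy `Meter` below records token counts and latency per call and rolls costs up by customer. The class, field names, and per-1K-token prices are illustrative assumptions, not OpenAI's actual API or pricing:

```python
from dataclasses import dataclass, field

# illustrative per-1K-token prices; real model pricing differs and changes over time
PRICES = {"gpt-4": {"input": 0.03, "output": 0.06}}

@dataclass
class Meter:
    events: list = field(default_factory=list)

    def record(self, customer, model, input_tokens, output_tokens, latency_s):
        # meter each call: token volumes drive backend cost, latency tracks performance
        cost = (input_tokens * PRICES[model]["input"]
                + output_tokens * PRICES[model]["output"]) / 1000
        self.events.append({"customer": customer, "model": model,
                            "input_tokens": input_tokens, "output_tokens": output_tokens,
                            "latency_s": latency_s, "cost": cost})
        return cost

    def cost_by_customer(self):
        # roll metered events up into a margin/cost view per account
        totals = {}
        for e in self.events:
            totals[e["customer"]] = totals.get(e["customer"], 0.0) + e["cost"]
        return totals

meter = Meter()
meter.record("acme", "gpt-4", input_tokens=1200, output_tokens=400, latency_s=2.1)
meter.record("acme", "gpt-4", input_tokens=500, output_tokens=100, latency_s=0.9)
print(meter.cost_by_customer())
```

The same event stream can feed both the business KPIs (cost, margin per account) and the technical KPIs (latency, traffic) the article mentions, since each record carries both kinds of data.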


“Auth” Demystified: Authentication vs Authorization

There are two technical approaches to modern authorization, each with a growing ecosystem around it: policy-as-code and policy-as-data. They are similar in that both advocate decoupling authorization logic from the application code, but they also have differences. In policy-as-code systems, the authorization policy is written in a domain-specific language, and stored and versioned in its own repository like any other code. OPA is one well-known example of this approach. It is a CNCF-graduated project that is mostly used in infrastructure authorization use cases, such as k8s admission control. It provides a great general-purpose decision engine to enforce authorization logic, and a language called Rego to define that logic as policy. The policy-as-data approach determines access based on relationships between users and the underlying application data. Rather than relying on rules in a policy, these systems use the relationships between subjects (users/groups) and objects (resources) in the application.
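A tiny sketch of the policy-as-data idea: access is derived from stored relationship tuples rather than from rules in a policy file (systems in this family are often inspired by Google's Zanzibar paper). The tuple layout and permission mapping below are invented for illustration:

```python
# relationship tuples: (subject, relation, object); layout invented for illustration
TUPLES = {
    ("alice", "owner", "doc:readme"),
    ("bob", "viewer", "doc:readme"),
}

# which stored relations grant which permissions (owners also get view rights)
PERMISSIONS = {"view": {"viewer", "owner"}, "edit": {"owner"}}

def allowed(subject, action, obj):
    # a subject may act on an object if any granting relation tuple exists
    return any((subject, rel, obj) in TUPLES for rel in PERMISSIONS[action])

print(allowed("bob", "view", "doc:readme"))   # True: bob is a viewer
print(allowed("bob", "edit", "doc:readme"))   # False: editing requires ownership
```

Note how the "policy" here is just data: granting bob edit rights means writing a new tuple, not editing and redeploying policy code, which is the crux of the difference between the two approaches.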


Redefining Software Resilience: The Era of Artificial Immune Systems

Artificial Immune Systems, inspired by the vertebrate immune system, provide an innovative approach to designing self-healing software. By emulating the biological immune system’s ability to adapt, learn, and remember, AIS can empower software systems to detect, diagnose, and fix issues autonomously. AIS offers a framework that enables the software to learn from each interaction, adapt to system changes, and remember past faults and their resolutions. AIS leads to a more robust, resilient system capable of tackling an array of unpredictable errors and vulnerabilities. The vertebrate immune system consists of innate immunity and adaptive immunity. Innate immunity protects us against known pathogens. Innate immunity is always non-specific and general. Present self-healing software models closely resemble innate immunity. Adaptive immunity can learn from current threats and apply the knowledge to handle future situations. At its core, these systems mimic the vertebrate immune system’s differentiation of self and non-self entities.


Europe’s Business Software Startups Prove Resilient: Why?

So what are the factors underpinning the resilience of Europe’s business software sector? One key element of the picture is demand from other tech companies. “Europe’s tech ecosystem is maturing,” says Windsor. “And as the sector matures, companies need tools. Those tools are being supplied by business software companies.” And of course, there is demand from companies outside the tech sector. From banking and financial services to manufacturing, digital transformation is continuing across the economy as a whole, creating opportunities for new B2B software providers. But how do European companies take advantage of those opportunities in a market that has been dominated by North American rivals? This isn’t captured in the data, but Windsor sees a home-market-first approach, widening out to include new countries and territories as businesses grow. “Anecdotally, companies start by selling to their domestic market, then they look at the continent. After that, they expand to other regions.” There is, Windsor adds, a preference for the Asia-Pacific region. The U.S., on the other hand, remains a difficult market.


Open RAN Testing Expands in the US Amid 5G Slowdown

To be clear, open RAN technology in the US has a number of backers. Dish Network is perhaps the most vocal, having built an open RAN-based 5G network across 70% of the US population. Further, other operators have hinted at their own initial open RAN aspirations, including AT&T and Verizon. Interestingly, the US government has also emerged as a leading proponent for open RAN. For example, the US military continues to fund open RAN tests and deployments. And the Biden administration's NTIA is doling out millions of dollars in the pursuit of open RAN momentum. Broadly, US officials hope to use open RAN technologies to encourage the production of 5G equipment domestically and among US allies, as a lever against China. But open RAN continues to face struggles. For example, US-based open RAN vendors like Airspan and Parallel Wireless have hit hurdles recently. And research and consulting firm Dell'Oro recently reported that open RAN revenue growth slowed to the 10 to 20% range in the first quarter, after more than doubling in 2022.


Low-Code and AI: Friends or Foes?

Although it may appear that AI will replace low-code, there are actually many opportunities for symbiosis between the two. Rather than eradicate low-code platforms entirely, LLMs will likely become more embedded within them. We’ve already seen this occur as low-code providers like Mendix and OutSystems integrated ChatGPT connectors. Microsoft has also embedded ChatGPT into its Power Platform and integrated GPT-driven Copilots into various developer environments. “Low-code and AI on their own are powerful tools to increase enterprise efficiency and productivity,” said Dinesh Varadharajan, the chief product officer at Kissflow. “But there is potential for the combination of both to unlock game-changing automation for almost every industry. The power comes from the congruence between low-code/no-code and AI.” There is also the opportunity to train bespoke LLMs on the inner workings of specific software development platforms, which could generate fully built templates from natural language prompts.

Cloud cost optimization should begin by measuring the drivers of cloud spend at a granular level and then providing full visibility to the teams and organizations that are behind the spend, says Tim Potter, principal, technology strategy and cloud engineering with Deloitte Consulting. “Near-real-time dashboards showing cloud resource utilization, routine reports of cloud consumption, and predictive spend reports will provide application teams and business units with the data needed to take action to optimize cloud costs,” he notes. ... Rearchitecting applications is a frequently overlooked way to achieve the cost and other benefits of transitioning to a cloud model. “Organizations also need to understand the various discount models and select one that optimizes costs yet also provides flexibility and predictability into spending,” says Mindy Cancila, vice president of corporate strategy for Dell Technologies. Cancila adds that organizations should not only consider current workload costs, but also how to manage costs for workloads as they scale over time.
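Measuring spend drivers at a granular level can start as simply as grouping billing records by a tag. The records and field names below are invented, but major clouds export comparable tag-based cost and usage data that this kind of rollup would consume:

```python
# toy billing records tagged by team; field names are invented for illustration
billing = [
    {"team": "payments", "service": "compute", "usd": 420.0},
    {"team": "payments", "service": "storage", "usd": 80.0},
    {"team": "search", "service": "compute", "usd": 300.0},
]

def cost_by(records, key):
    """Aggregate spend by any tag, giving each team visibility into its own drivers."""
    totals = {}
    for r in records:
        totals[r[key]] = totals.get(r[key], 0.0) + r["usd"]
    return totals

print(cost_by(billing, "team"))     # {'payments': 500.0, 'search': 300.0}
print(cost_by(billing, "service"))  # {'compute': 720.0, 'storage': 80.0}
```

Feeding rollups like these into near-real-time dashboards is what gives application teams the per-team, per-service visibility the excerpt calls for.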


Warning: Attackers Abusing Legitimate Internet Services

Cloud storage platforms, and Google Cloud in particular, are the most exploited, followed by messaging services - most often Telegram, including via its API - as well as email services and social media, the researchers found. Examples of other services being abused by attackers include OneDrive, Discord, Gmail SMTP, Mastodon profiles, GitHub, bitcoin blockchain data, the project management tool Notion, malware analysis site VirusTotal, YouTube comments and even Rotten Tomatoes movie review site profiles. "It is important to note that ransomware campaigns use legitimate cloud storage tools such as mega.io or MegaSync for exfiltration purposes as well," although the crypto-locking malware itself may not be coded to work directly with legitimate tools, the report says. Criminals' choice of service depends on desired functionality. Anyone using an info stealer such as Vidar needs a place to store large amounts of exfiltrated data. The researchers said cloud services' easy setup for less technically sophisticated users makes them a natural fit for such use cases.



Quote for the day:

"We're all passionate about something, the secret is to figure out what it is, then pursue it with all our hearts" -- Gordon Tredgold

Daily Tech Digest - August 19, 2023

Inside the Rise of 'Dark' AI Tools - Scary, But Effective?

This shouldn't be surprising, since building LLMs is an intensive endeavor. "As what WormGPT showed, even with a dedicated team of people, it would take months to develop just one customized language model," Sancho and Ciancaglini said in the report. Once a product launched, service providers would need to fund not just ongoing refinements but also the cloud computing power required to support users' queries. Another challenge for would-be malicious chatbot developers is that widely available legitimate tools can already be put to illicit use. Underground forums abound with posts from users detailing fresh "jailbreaks" for the likes of ChatGPT, designed to evade providers' restrictions, which are designed to prevent the tool from responding to queries about unethical or illegal topics. In his WormGPT signoff earlier this month, Last made the same point, noting that his service was "nothing more than an unrestricted ChatGPT," and that "anyone on the internet can employ a well-known jailbreak technique and achieve the same, if not better, results by using jailbroken versions of ChatGPT."


4 ways simulation training alleviates team burnout

Simulation training boosts confidence because unlike traditional training methods, the learner gains experience over time through true-to-life virtual cyber warfare training and sparring against simulated malicious adversaries that behave like human opponents. By training in the same IT infrastructure they have at their job—complete with networks, servers, and security tools—they improve competencies and judgment skills, and gain “muscle memory” so they feel prepared to respond to a real cyber incident. ... With simulation training, SOC teams learn to identify false positives and high-priority alerts more effectively over time as they become familiar with the types of alerts that end up impacting their organization’s infrastructure. The training can mimic the high volume of alerts they receive during the day and help teams develop effective triage strategies to streamline their response processes. Practicing this in simulation allows teams to experiment on their approach and fine-tune it without fear of making a mistake during operating hours.


A managerial mantra in the age of artificial intelligence

The rise of modern management brought forth professionalism through business schools, advocating ethical standards and fostering professional workplaces globally. Often, this professionalism is rooted in the mastery of managerial principles. These principles are created and taught by a variety of business school professors, and they are developed in close collaboration with executives and leaders. Unfortunately, a lot of these ideas have only been applied sparingly due to practical limitations. These limitations may result from the limited time available for decision-making in the corporate world, the need to manage uncertainties, the lack of data and accurate knowledge of the facts, and occasionally even ignorance of professional principles. ... Organisational thinkers have traditionally identified this as satisficing, whereby managers settle for a good-enough, not necessarily the best, choice. In other words, constraints on time availability lead a manager to do a limited analysis of the impact of a job candidate on future organisational performance.


Five Challenges in Implementing AI in Automation

Accuracy and bias are two critical, yet recurring issues in AI that require human supervision. For example, generative AI applications are prone to hallucination, or making up facts based on their training dataset. In the same vein, biased datasets fed into a machine learning model can produce biased results. If a financial services firm is using an AI-driven automated system to accept or reject credit applications, for example, it’s essential to avoid well-documented, systemic biases toward women or people of color that may be contained in the training dataset. As we progress toward AI-driven decision-making, it’s critical for humans to remain in the loop, verifying the results generated by machine learning algorithms to check bias and other forms of inaccuracy. Keeping humans in the loop is a critical step toward re-training algorithms to perform more effectively in a production environment. ... Regulating AI is an ongoing issue globally, and the legal field continues to be shaped by emerging technologies including generative AI. 


Mastering Agile Security: Safeguarding Your Projects in a Fast-Paced World

Just ensuring rapid delivery of the product is not enough. The key to Agile success is to ensure that security is an integral part of the process from the beginning. And since agile is an iterative process, and is all about accommodating changing requirements as and when they arise, security must also be part of this iterative process. Regular security reviews and tests whenever there is a change in the product is the key to delivering a working as well as secure product. ... Agile security is not an impediment to the Agile process; rather, it's an essential component that ensures the final product is robust, resilient, and safeguarded against potential threats. It's not about slowing down development but about integrating security seamlessly into every phase of the project lifecycle. ... At the core of Agile security is the Agile mindset. This mindset emphasizes collaboration, adaptability, and constant improvement. Security is not a one-time event but an ongoing effort that requires the entire team's commitment.


Managing Software Development Team Dynamics from Within

In most cases, the whole team will benefit from trying new tools or services every now and then, just to understand patterns and trends. We know we should always be increasing automation. However, especially with things like JavaScript frameworks, up jumps the New Pusher — too keen to adopt the new when no evidence exists that the gains are worth the disruption cost. Or worse, ignoring the disruption cost entirely. The New Pusher can make the team pine for the road not taken instead of doing what they should: investigating a little on their own time to see how the team will truly benefit from the shiny find. When thinking about adopting a new tool or service, the team should not trial it somewhere inconsequential, as that will be neither conclusive nor beneficial. A short examination or study period should lead to a yes/no decision and the use of the tool or platform somewhere of value. Once the pattern is set, the New Pusher can work to that template. The suspicion that people just want to put new experiences on their CV is a little irrelevant.


How Generative AI Is Making Data Catalogs Smarter

Sequeda explained how generative AI, which leverages conversational, chat-oriented interfaces to surface results from large language models (LLMs), improves productivity and encourages the adoption of a data catalog. With more traditional data catalogs, administrative tasks require more significant manual interventions, time, and some advanced skills and analysis. Smart catalogs remove these barriers by simplifying and automating some of the administrative workflows. As a result, team members in an organization see faster time to value and find it easier to get started with the catalogs. On the data producers’ end, Sequeda said, “Generative AI automatically enriches metadata around the inputs and provides descriptions and synonyms” in the data catalog, smoothing catalog record creation and upkeep. Also, smart data catalogs give data engineers “code summaries” about catalog queries, reducing the time to do DataOps, including any pipeline malfunctions. Using smart data catalogs, consumers find inspiration when the generative AI suggests alternative queries from previous searches and patterns of results. 


Four Myths About Digital Transformation And How To Debunk Them By Modernizing At The Data Layer

A data fabric architecture is essentially a data mesh with an added “abstraction layer” that virtualizes all data into a centralized platform. The benefit is a single pane of glass for all data, virtualized and contextualized for a broader range of business users to work with. The trade-off is that this sudden visibility can be daunting for DX teams newly tasked with untangling all the previously unseen dependencies, vulnerabilities, governance issues, and compliance or security gaps that suddenly appear. All three approaches remain represented in today’s marketplace for organizations to choose from. And while the calculus for making the choice will vary for each company based on their DX goals and level of technical expertise, a common ingredient to success is to prioritize scalable and repeatable processes through automation and low-code wherever possible. ... Choosing the right underlying data architecture is an ongoing balance of matching the pros and cons of the approach to the specific business and operational needs of the organizations. 


A license to trust: Can you rely on 'open source' companies?

Amanda Brock, CEO of OpenUK, which doesn't have a horse in the IaaS race, appeared disappointed with the company's move. "HashiCorp has always been a true open source company, and what Mitchell Hashimoto and Armon Dadgar achieved from a project never intended to be commercialized has been incredible." Brock then asks, "Taking it to an IPO and seeing Mitchell have the apparent wisdom to step aside and allow a more experienced individual to run HashiCorp – but has that also led to its downfall as an open source company?" Her answer is yes. "The statements about BSL are sadly open-washing. It would be wrong to suggest these two ever intended a bait and switch, but they have indeed switched away from open source. The pressure of enabling their competitors with their innovations – an inevitability of open source – did not align with the need to generate shareholder value." That led her to another, bigger question: How much money is enough? Is a lot of money, with others also making a lot of money, a reason to stop? She's left "wondering whether, had Mitchell remained CEO, this would have occurred."


Culture Transformation: What leaders need to know

Fortunately, culture only appears enigmatic: There are practical, tangible, measurable ways leaders can properly manage their culture. And it all starts with alignment. Executives need to be on the same page with their leadership teams -- particularly CHROs -- about where their culture stands today and where it’s headed in the future. You might be thinking: “We’re already aligned about our culture.” But it’s not enough to be generally on the same page. The best leaders are synchronised on specific, seemingly small details about their culture and how they affect performance. In one of our client organisations, the goal of being a high-performance culture is behind all decisions. Every leadership meeting keeps high-performance front and centre in their conversation. For instance, leaders might be on the same page about the core values and beliefs -- such as customer-centricity or excellence in safety outcomes -- that they want their culture to embody. But the best path to excellence varies tremendously by industry, market segment, product and more. 



Quote for the day:

"Success is not a random act. It arises out of a predictable and powerful set of circumstances and opportunities." -- Malcolm Gladwell

Daily Tech Digest - August 18, 2023

Though simpler, India’s data privacy law is stricter than GDPR in some ways

If you think this is all a tough ask, you should know that the law is simpler and less prescriptive than data privacy laws in many countries. This kind of simpler law is appropriate for a country like India for two reasons: first, because India is just starting down the road of data privacy compliance, and second, because India has a huge SME sector that would struggle to comply with a more complex law. At the same time, the law is stricter than GDPR in some ways; for example, in the EU, a business that can make the case that it has a “legitimate interest” in processing personal data can do so without consent. This is largely not possible in India. Further, in the EU, a data breach needs to be reported to the regulator, and to individuals only where the data fiduciary concludes that the breach could result in a risk to their rights and freedoms. The government has given itself the power to exempt classes of data fiduciaries from provisions of the law. This includes start-ups, which have been specifically mentioned.


Exploring Differences Between Diversity and Inclusion

At an organizational level, both diversity and equity can be addressed through recruitment processes, but inclusivity is the most challenging and up to the company as a whole, including all employees. One of the ways to encourage employees to adopt inclusive behavior is through the power of education. When people understand why change is important, they are often more inclined to respond. The word “inclusive” is not a new concept—however, sometimes it is referred to with little substance. Workplaces say they are inclusive because they have a diverse representation of employees, but when you ask the minority groups in that organization if they feel heard, the answer is often conflicting. Rather than playing the game, they feel as though they are mascots or warming the bench. When employees realize inclusion means making sure minority groups feel like they belong, it allows them to assess and challenge their own personal bias, which may be preventing them from fully embracing all perspectives.


Breached for years: How long-term cyber attacks are able to linger

The first step for any cyber criminal looking to pull off a years-long hack is to find a way into a target’s network. Even when organizations make it difficult, there’s usually one entry point. Whether by using initial access brokers (IABs), exploiting vulnerabilities, or using employee credentials – the most effective of the three – they need to get in without tripping any alarms. During the early days of a breach, hackers will do very little other than observe a business and how its people work. They’ll learn all the different processes that staff execute during a typical workday and use that knowledge to mask their movements around the network. There will be no intrusive actions (data exfiltration, vulnerability exploits, lateral movements) until they know how to blend in with everyday traffic being triaged by the organization’s security operations center (SOC) analysts. Attackers usually rely on one of two methods to remain undetected for extended periods of time. The first is to use genuine compromised credentials and mimic that employee’s usual behavior.


Tech leaders weigh in on the upside and flipside of generative AI

So if projects are already getting off the ground, what are feelings about where generative AI works best, and how? “The best practises are undoubtedly cross-functional collaboration, ‘try before you buy,’ and learn from what you do,” says Marc O’Brien, CIO at radiology healthcare service provider Medica Group. “In my experience, the algorithms from reputable firms do what they say on the tin but what really matters is where you position in the workflow.” Team Teach’s Ivell believes companies can gain a fast start by using tools being built into applications and suites. “One of the key and immediate opportunities of generative AI is it’s already being built into some tools we already use, be that Power BI, Adobe or more industry-specific apps,” he says. “To take advantage of these needs some internal discovery or analysis of these new functions, understanding how we’d use them, and, in the first instance, training our staff how to exploit the new features. People tend to use tools in the way they always have, and adoption of new features can be slow, so we need to accelerate that.”


6 best practices to defend against corporate account takeover attacks

It’s important to have strong multifactor authentication around all corporate accounts, says Bryan Willett, CISO at Lexmark. "What we’re finding with some of the latest phishing services that are out there, such as EvilProxy, is that they’re getting very good at imitating a login screen that looks just like your corporate login screen and your corporate MFA challenge," Willett says. "And the user has the potential of falling victim to that and sharing their MFA." ... Organizations should also implement contextual access management that considers a user’s current location, the device being used, time of access, network environment, behavior patterns, and other contextual information, according to Halstead. "By doing so, the risk of unauthorized access, often exploited in corporate account takeovers, can be significantly minimized," he says. ... Employee education and awareness are critical, says Halstead. This "human firewall" remains a very important defense in preventing corporate account takeovers.
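As a sketch of the contextual access management Halstead describes, the toy scoring function below weighs a few of the signals mentioned (device, location, time of access). The signals, weights, and thresholds are all illustrative assumptions, not any vendor's actual model:

```python
def access_risk(context):
    """Toy contextual risk score; the signals and weights are illustrative only."""
    score = 0
    if context.get("new_device"):
        score += 30
    if context.get("unusual_location"):
        score += 30
    if not 8 <= context.get("hour", 12) <= 18:  # access outside working hours
        score += 20
    if context.get("impossible_travel"):
        score += 40
    return score

def decide(score):
    # escalate from a silent allow, to an MFA challenge, to an outright block
    if score >= 60:
        return "deny"
    if score >= 30:
        return "step-up-mfa"
    return "allow"

print(decide(access_risk({})))                                # allow
print(decide(access_risk({"new_device": True, "hour": 23})))  # step-up-mfa
```

The design point is that no single signal blocks a login; it is the combination of unusual context that raises the bar, which is what makes account takeover with stolen credentials harder.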


Make Data Security Training Fun and Engaging with These Tips

What are your employees most interested in? What’s most likely to capture their attention? If you don’t know, ask. Gather insights from employees to identify their current concerns and interests and integrate those into the content. Consider how you could leverage their personal interests in your storytelling approach. For instance, if you have a large base of avid football fans, how might a Super Bowl-themed story or challenge related to data security help capture their interest? Ensure accuracy while entertaining: learning outcomes need to take center stage in your communication efforts, of course. Strive to provide accurate information about cybersecurity and employees’ roles in helping to protect systems and data, while integrating some fun into the delivery of the content. ... Good stories have a protagonist (in this case, the employees), an antagonist (cybercriminals), and some tension that leads to a climax in the plotline. Use these elements to create content that entertains while also illustrating the tangible outcomes and repercussions of poor data security practices, like the potential damage to personal and professional relationships.


Robotic Process Automation: Is Your Job at Risk?

One thing is certain: Change is inevitable. Hairston points to the World Economic Forum’s The Future of Jobs Report 2023, which estimates that 44% of workers’ skills will be disrupted over the next five years. The current pace of technology evolution is transforming jobs faster than ever. IT can either be a key facilitator of that change or merely a recipient of it. “In the former case, IT can push the business toward RPA and other automation technologies that are designed to be used by business,” he says. “This will help companies achieve their most strategic objectives and view IT as more of a partner.” As RPA and AI gain stronger footholds, the only way forward is to help displaced team members reskill and upskill, Zhao says. “Fortunately, many online training platforms are available at affordable costs.” To expedite learning, he advises organizations to develop curated content that employees can freely access. Zhao notes that such content should be relevant both to the work being phased out and to the tasks that team members will need moving forward. Executive-level sponsorship of any intelligent automation strategy is essential for long-term success.

We think that as general-purpose robots are becoming more common—and they are—people could misuse them. You can find videos online showing how easy it is to attach a weapon to a mobile robot. So, there’s a reasonable concern about who will have access to robots and what they can do with that access. We want to make sure that there will be some regulation around this—and lead the charge in getting it put into place. Policymakers need to get engaged and be informed about the capabilities of the robots, as well as the potential dangers. We are being vocal about our anti-weaponization stance: robots should not cause harm, nor should they impinge upon anybody’s privacy. The industry that we’re hoping to build only exists if people trust robots. If they’re afraid of them, then that’s going to be a problem. ... By managing the final assembly ourselves, we have better control over the quality and cost, and it helps us to rapidly iterate. One of the things we have learned is that when you iterate your design and work with a partner to do the assembly, the communication challenges are pretty thick. 


The Architect’s Guide to Thinking about Hybrid/Multicloud

While most people will tell you complexity is the hardest thing to manage in a multicloud environment, the truth is that consistency is the primary challenge. Having software that can run across clouds (public, private, edge) provides the consistency to manage complexity. Take object storage. If you have a single object store that can run on AWS, GCP, Azure, IBM, Equinix or your private cloud, your architecture becomes materially simpler. Consistent storage and its features (replication, encryption, etc.) enable the enterprise to focus on the application layer. Consistency creates optionality, and optionality creates leverage. Reducing complexity can’t come at an unsustainable cost, though. By selecting software that runs across clouds (public, private, edge), you reduce complexity and increase optionality. If it’s cheaper to run that workload on GCP, move it there. If it’s cheaper to run that database on AWS, run it there. If it’s cheaper to store your data on premises and use external tables, do that.
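The "consistency creates optionality" argument can be made concrete: if the storage interface is the contract rather than any one cloud's API, placement becomes a pure cost decision. The sketch below is illustrative only; the endpoints and prices are hypothetical, and the in-memory backend stands in for what would really be S3-compatible HTTP calls:

```python
class ObjectStore:
    """Minimal sketch of a cloud-agnostic object store interface.

    A real implementation would speak an S3-compatible protocol to the
    endpoint; here the backend is an in-memory dict so only the shape
    of the API matters.
    """
    def __init__(self, cloud: str, endpoint: str):
        self.cloud = cloud
        self.endpoint = endpoint
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

# Endpoints are illustrative; the application code never changes,
# because the storage interface, not the cloud, defines the contract.
ENDPOINTS = {
    "aws":     "s3.amazonaws.com",
    "gcp":     "storage.googleapis.com",
    "private": "minio.internal.example.com",
}

def cheapest_store(prices_per_gb: dict) -> ObjectStore:
    """Optionality in practice: place the data wherever it is cheapest."""
    cloud = min(prices_per_gb, key=prices_per_gb.get)
    return ObjectStore(cloud, ENDPOINTS[cloud])
```

Because `put` and `get` behave identically everywhere, moving the workload when relative prices change is a one-line configuration decision rather than a rewrite.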


Observability – everything you need to know

The first thing you need to do on any of these platforms is to get your data into them. Historically, for log analytics solutions, like Splunk, that was relatively easy. I don’t mean to trivialise this, but you would grab logs from all of your infrastructure and send those back to Splunk, and we would process those. You would usually deploy an agent to do it. For observability solutions – not just ours, but any of them – you need more data. In addition to those logs that you capture from each host, you also need system metrics, application metrics, profiles, distributed traces and everything else. There are additional layers of complexity here. Now you’re not just capturing human-readable logs from operating systems, you’re capturing all these other types of data from the individual applications that people have written. That requires hooks into all of the hundreds of thousands of libraries that software developers use. I think that has historically held back this industry to a fairly large degree. We rely on a project that I co-founded with a number of other people and companies.
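The distinction drawn above – logs alone versus logs plus metrics, traces and the rest – can be illustrated with a toy instrumentation layer. This is a from-scratch sketch for illustration only, not the project mentioned in the interview or any real telemetry SDK; every name here (`TELEMETRY`, `span`, `handle_request`) is hypothetical:

```python
import time
from contextlib import contextmanager

# Collected telemetry, keyed by signal type. In a real observability
# pipeline an agent would export these to a backend; here they just
# accumulate in memory so the three signal types are visible side by side.
TELEMETRY = {"logs": [], "metrics": [], "traces": []}

def log(message: str) -> None:
    """The classic signal: a timestamped, human-readable log line."""
    TELEMETRY["logs"].append({"ts": time.time(), "message": message})

def record_metric(name: str, value: float) -> None:
    """A numeric measurement, suitable for aggregation and alerting."""
    TELEMETRY["metrics"].append({"name": name, "value": value})

@contextmanager
def span(name: str):
    """A trace span: a named, timed unit of work inside a request."""
    start = time.time()
    try:
        yield
    finally:
        duration = time.time() - start
        TELEMETRY["traces"].append({"span": name, "duration_s": duration})
        record_metric(f"{name}.duration_s", duration)

def handle_request() -> str:
    # One operation emits all three signal types: a span, a metric
    # derived from it, and an ordinary log line.
    with span("handle_request"):
        log("request received")
        return "ok"
```

The hard part the interview points at is not this emission code but getting equivalent hooks into the huge ecosystem of third-party libraries an application pulls in, which is why shared, community-maintained instrumentation matters.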



Quote for the day:

"I believe it is important for people to create a healthy mental environment in which to accomplish daily tasks." -- Darren L. Johnson