Daily Tech Digest - February 14, 2023

Will your incident response team fight or freeze when a cyberattack hits?

CISOs shouldn’t be surprised to hear that even well-prepared teams can have moments of paralysis; it’s just human nature, McKeown says. She says sometimes responders may experience cognitive narrowing, where they’re so focused on the situation directly in front of them that they can’t consider the full circumstances—an experience that can stop responders from thinking as they normally would. Niel Harper, an enterprise cybersecurity leader who serves as a board director with the governance association ISACA, witnessed a team freeze in response to a ransomware attack on his first day working with a company as an advisor. “They literally did not know what to do, even though they had some experience with [incident response] walkthroughs,” he recalls. “They were in panic mode.” Harper says he has seen other situations where the response was stymied and thus delayed. In some cases, teams were afraid that they’d be seen as overreacting. In others, they were paralyzed with the fear of being blamed.


What Kind of Glasses Are You Wearing? Your View of Risk May Be Your Biggest Risk of All

Consider an organization that is focusing on increasing revenue by expanding outbound sales in new territories in the European market. A compliance-focused organization might conduct an internal assessment of EU General Data Protection Regulation (GDPR) requirements, determine if there are current controls in place to meet them and report metrics indicating the organization is compliant. However, a risk-focused enterprise begins by assessing the unique threats within the region and determining the risk factors that could prevent the organization from conducting sales in Europe. Wearing risk-colored glasses empowers risk professionals to proactively monitor and communicate risk in a context their organization will understand. Viewing business outcomes from this perspective enables organizational leadership to prioritize investments and agree on a suitable level of protection.


Australian organisations underinvesting in cyber security

The underinvestment was more stark among small companies, of which 69% had not invested enough in cyber security, according to the study conducted by Netskope, a supplier of secure access service edge (SASE) services. Major data breaches over the past year, however, have cast the spotlight on cyber security, with over three-quarters (77%) of 300 respondents who participated in the study noting that their leadership’s awareness of cyber threats had increased. Some 70% also noted an increase in their leadership’s willingness to bolster investments – the proportion of organisations that are planning bigger cyber security budgets between 2022 and 2023 jumped to 63%, compared with 45% that saw increases between 2020 and 2022. This increase is most pronounced among larger organisations with over 200 employees, where over 80% are increasing cyber security budgets. Among small firms with fewer than 20 employees, 41% planned to spend more on cyber security between 2022 and 2023, up from just 23% between 2020 and 2022.


What Is Zero-Knowledge Encryption?

Zero-knowledge encryption is not a specific encryption protocol, but a process that focuses on preserving a user’s data privacy and security to the maximum extent. For a service to be truly zero-knowledge, a user’s data must be encrypted before it leaves the device, while it’s being transferred, and when it is stored on an external server. This works because modern encryption is incredibly effective at barring unauthorized participants from decoding encrypted data: it’s functionally impossible to crack modern-day encryption using brute-force approaches. However, for ease of use and UX benefits, many service providers also hold a user’s encryption key—introducing an additional point of failure that’s attractive for malicious actors because service providers often hold many user keys. Sharing knowledge of an encryption key with a service provider has a variety of benefits (and also detriments), but it also means that someone other than the user can decrypt the data—which makes the service not zero-knowledge.
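To make the "encrypted before it leaves the device" idea concrete, here is a minimal Python sketch of client-side encryption in which only the user ever holds the key. The key-derivation parameters and the storage comment are illustrative assumptions, not any particular provider's API.

```python
# Minimal zero-knowledge sketch: the key is derived and held client-side,
# so the server only ever stores opaque ciphertext.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    """Derive a symmetric key from a user passphrase; the server never sees it."""
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

salt = os.urandom(16)                                  # stored alongside the ciphertext
key = derive_key(b"correct horse battery staple", salt)

ciphertext = Fernet(key).encrypt(b"sensitive note")    # encrypted on-device
# upload(salt, ciphertext)  -- hypothetical call; the server stores only these bytes

plaintext = Fernet(key).decrypt(ciphertext)            # decrypted on-device only
print(plaintext)
```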


Google Touts Web-Based Machine Learning with TensorFlow.js

Why Do ML over the Web? First off, Mayes mentioned privacy. One common use case is for processing sensor data in ML workloads — such as data from a webcam or microphone. Using TensorFlow.js, Mayes said, “none of that data goes to the cloud […] it all happens on-device, in-browser, in JavaScript.” For this reason, TensorFlow.js is being used by companies doing remote healthcare, he said. Another privacy use case is human-computer interaction. “With some of our models, we can do body pose estimation, or body segmentation, face keypoint estimation, all that kind of stuff,” Mayes said. Lower latency is another reason to do ML in the browser, according to Mayes. “Some of these models can run over 120 frames per second in the browser, on an NVIDIA 1070 let’s say,” he said. “So that’s kind of [an] old generation graphics card and [yet it’s] still pushing some decent performance there.” Cost was his third reason, “because you’re not having to hire and run expensive GPUs and CPUs in the cloud and keep them running 24/7 to provide a service.”


Solidifying Risk Management: How to Get Started With Continuous Monitoring

Continuous monitoring entails understanding not only the risks you’re facing now and those visible on the horizon, but also the risks beyond the horizon. This requires recognizing risk velocity, acknowledging risk volatility, and developing and deploying a mechanism by which you can periodically check in on, and be alerted to, key risks. The key is to think differently, and to use your 360° view of your organization to develop strategies that help you simultaneously plan and execute in coordination and ongoing communication with first- and second-line roles. ... KRIs are crucial for continuous monitoring, helping companies be more proactive in identifying potential impacts. KRIs are selected and designed by analyzing risk-related events that may affect the organization’s ability to achieve its objectives. Typically, by looking at risk events that have impacted the organization (in the past or currently), it’s possible to work backward to pinpoint the root-cause or intermediate events that led to them.
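As a rough illustration of the "periodically check in on, and be alerted to, key risks" mechanism, here is a minimal Python sketch of KRI threshold monitoring. The indicator names, values, and thresholds are hypothetical.

```python
# Illustrative KRI check: compare each key risk indicator's latest reading
# against an alert threshold and surface any breaches.
from dataclasses import dataclass

@dataclass
class KRI:
    name: str
    value: float       # latest observed reading
    threshold: float   # level at which the risk owner should be alerted

def check_kris(kris: list[KRI]) -> list[str]:
    """Return alert messages for any indicator breaching its threshold."""
    return [
        f"ALERT: {k.name} at {k.value} exceeds threshold {k.threshold}"
        for k in kris
        if k.value > k.threshold
    ]

alerts = check_kris([
    KRI("unpatched critical vulnerabilities", value=14, threshold=10),
    KRI("days since last DR test", value=95, threshold=90),
    KRI("vendor SLA breaches this quarter", value=1, threshold=3),
])
print("\n".join(alerts))
```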


Using the blockchain to prevent data breaches

A primary reason for the increase in data breaches is over-reliance on centralized servers. Once consumers and app users enter their personal data, it’s directly written into the company’s database, and the user doesn’t get much say in what happens to it afterward. Even if users attempt to limit the data the company can share with third parties, there will be loopholes to exploit. As the Facebook–Cambridge Analytica data-mining scandal showed, the results of such centralization can be catastrophic. Additionally, even assuming goodwill, the company’s servers could still get hacked by cybercriminals. In contrast, blockchains are decentralized, immutable records of data. This decentralization eliminates the need for one trusted, centralized authority to verify data integrity. Instead, it allows users to share data in a trustless environment. Each member has access to their own data, a system known as zero-knowledge storage. This also makes the network less likely to fall victim to hackers. Unless attackers bring down the whole network simultaneously, the undamaged nodes will quickly detect the intrusion.
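A toy hash chain makes the tamper-detection claim concrete: because each block's hash covers its predecessor's hash, editing any record invalidates everything after it. This sketch is a simplification that omits the consensus a real blockchain runs across many nodes.

```python
# Toy hash chain: altering any record changes every hash that follows it,
# so honest nodes comparing chains spot the tampering immediately.
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list[dict]) -> list[str]:
    hashes, prev = [], "genesis"
    for rec in records:
        prev = block_hash(rec, prev)
        hashes.append(prev)
    return hashes

records = [{"user": "alice", "consent": True}, {"user": "bob", "consent": False}]
original = build_chain(records)

records[0]["consent"] = False             # an attacker silently edits a record
assert build_chain(records) != original   # the mismatch exposes the intrusion
```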


Companies serious about customer privacy in 2023 will start with data security

This should be viewed as an opportunity, rather than yet another compliance burden for boards to manage. In fact, cyber executives are increasingly viewing data privacy laws and regulations as an “effective tool for reducing cyber risks … despite the challenges associated with compliance”, according to the World Economic Forum. But to improve privacy protections, those same executives must begin by enhancing security. Why? Because you can have security without privacy, but never privacy without security. Privacy is the right for data subjects to control how their personal information is collected, stored and used. Fail to secure this data and others could access and use it unlawfully. In these terms, data security is an essential prerequisite for protecting customers’ privacy rights. It’s telling that the name for data privacy day in the EU is Data Protection Day. Without adequate “technical and organisational measures” as cited in Popia, true data privacy will always be out of reach.


Healthcare in the Crosshairs of North Korean Cyber Operations

In addition to obfuscating their involvement by operating with other affiliates and foreign third parties, North Korean actors frequently use fake domains, personas, and accounts to execute their campaigns, CISA and the others said. "DPRK cyber actors will also use virtual private networks (VPNs) and virtual private servers (VPSs) or third-country IP addresses to appear to be from innocuous locations instead of from DPRK." The advisory highlighted some of the newer software vulnerabilities that state-backed groups in North Korea have been exploiting in their ransomware attacks. Among them were the Log4Shell vulnerability in the Apache Log4j framework (CVE-2021-44228) and multiple vulnerabilities in SonicWall appliances. CISA's recommended mitigations against the North Korean threat included stronger authentication and access control, implementing the principle of least privilege, employing encryption and data masking to protect data at rest, and securing protected health information during collection, storage, and processing.


Three ideal scenarios for anomaly detection with machine learning

Anomaly detection in many business areas has traditionally been based on predetermined rules. For example, a fraud detection system could spot suspicious card payments which greatly exceeded a spending threshold. The main problem with this approach is its lack of flexibility, given that the set of rules must be continuously updated to cope with ever-evolving scenarios, such as anomalous activity due to a new type of malware. Here lies machine learning’s full potential. Any system fuelled with this technology can digest enormous datasets, autonomously identify recurring patterns and cause/effect relationships among the data analysed, and create models portraying these connections. In addition, when properly trained, such models will be capable of processing additional data to make predictions, further refining their skills through experience as they consume more and more information.
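As a minimal sketch of the learned (rather than rule-based) approach, the following uses scikit-learn's IsolationForest on fabricated card-payment amounts; the flagged outlier stands in for the suspicious transaction in the example above.

```python
# Learned anomaly detection: fit on historical "normal" payments, then
# flag new payments that look unlike anything seen before.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)
normal_spend = rng.normal(loc=50, scale=15, size=(500, 1))  # typical card payments

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_spend)                    # learns what "usual" looks like

new_payments = np.array([[48.0], [62.0], [900.0]])
print(model.predict(new_payments))         # 1 = normal, -1 = flagged anomaly
```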



Quote for the day:

"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - February 13, 2023

Mergers and Acquisitions in Healthcare: The Security Risks

Incidents such as the CommonSpirit ransomware attack highlight the critical importance for entities to carefully assess and address potential IT security risks involved in a potential merger or acquisition, experts say. "We are seeing that well-established health systems or entities that have very mature cybersecurity programs take on an entity which is less secure," says John Riggi, national adviser for cybersecurity and risk at the American Hospital Association. The association advises hospitals to treat cyber risk in a merger with the same priority as financial analysis. But determining and identifying the array of systems and myriad devices used by another healthcare entity that's being acquired is not easy. "When you buy an organization, you typically don't know everything you're buying," says Kathy Hughes, CISO of New York-based Northwell Health. Northwell, itself the result of a 1997 merger between North Shore Health System and Long Island Jewish Medical Center, has 21 hospitals and over 550 outpatient facilities, many of them acquired.


Forget ChatGPT vs Bard, The Real Battle is GPUs vs TPUs

Solving for efficient matrix multiplication can cut down on the amount of compute resources required for training and inferencing tasks. While other methods like quantisation and model shrinking have also proven to cut down on compute, they sacrifice accuracy. A tech giant creating a state-of-the-art model would rather spend the $5 million if there’s no way to cut costs.  ... NVIDIA’s GPUs were well-suited to matrix multiplication tasks due to their hardware architecture, as they were able to effectively parallelise across multiple CUDA cores. Training models on GPUs became the status quo for deep learning in 2012, and the industry has never looked back. Building on this, Google also launched the first version of the tensor processing unit (TPU) in 2016, which contains custom ASICs (application-specific integrated circuits) optimised for tensor calculations. In addition to this optimisation, TPUs also work extremely well with Google’s TensorFlow framework; the tool of choice for machine learning engineers at the company.
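A small numpy sketch can illustrate the quantisation trade-off mentioned above: 8-bit operands are far cheaper to store and multiply, but the product drifts from the full-precision answer. The matrix sizes and the simple symmetric scaling scheme are illustrative assumptions.

```python
# Quantisation trade-off: int8 matmul is cheap but inexact relative to float32.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256)).astype(np.float32)
B = rng.standard_normal((256, 256)).astype(np.float32)

exact = A @ B                                # full-precision matmul

def quantize(m: np.ndarray, bits: int = 8):
    """Map floats to signed ints with a per-matrix scale factor."""
    scale = np.abs(m).max() / (2 ** (bits - 1) - 1)
    return np.round(m / scale).astype(np.int8), scale

qa, sa = quantize(A)
qb, sb = quantize(B)
# Accumulate in int32 to avoid overflow, then rescale back to float.
approx = (qa.astype(np.int32) @ qb.astype(np.int32)) * (sa * sb)

rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error from int8 quantisation: {rel_err:.3%}")
```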


As Digital Trade Expands, Data Governance Fragments

The upshot is that we are still far from any more global efforts. Even preliminary convergence on national laws about data protection and privacy between the United States and the European Union is difficult to achieve. Instead, Aaronson advocated for the establishment of a new international organization that could provide proper incentives to, and pay, global firms to share data. Overall, the panellists urged that technical discussions of data flows, data governance and rules for digital trade be contextualized within fundamental concerns about the nature of data and the role of human rights. These concerns equally require attention and governance. The discussion on effective digital governance requires a fundamental rethink of the nature of data. As emphasized by panellist Kyung Sin Park, data embeds fundamental human freedoms and human information. It is closely linked to human rights. Data is much more than an economic asset used in training artificial intelligence (AI) algorithms.


Fall in Love with the Problem, Not the Solution: A Handbook for Entrepreneurs

Think of a problem—a big problem, something worth solving, something that would make the world a better place. Ask yourself, who has this problem? If you happen to be the only person on the planet with this problem, then go to a shrink. It’s much cheaper and easier than building a startup. But if a lot of people have this problem, go and speak with those people to understand their perception of the problem. Know the reality, and only then start building the solution. If you follow this path and your solution works, it’s guaranteed to create value. But there is a more important part to this. Imagine speaking with people and their feedback is, yeah, go ahead and solve that for me—this is a big problem. All of a sudden you feel committed to this journey. You essentially fall in love with the problem. Falling in love with the problem dramatically increases your likelihood of being successful because the problem becomes the north star of your journey, keeping you focused.


Data Mobility Framework: Expert Offers Four Keys to Know

It’s common for hybrid work teams to schedule when employees will be in the office and when they’ll work remotely. But while remote workers don’t always work from the same home office, they do expect similar access to business data and applications regardless of the network or device they’re using—and all of this remote connectivity has a material impact on data storage demands. Organizations try to balance data storage initiatives to address this without causing downtime to mission-critical applications and data. The faster organizations can add new storage or move data non-disruptively to another location, the better services they can deliver to end-users. Thankfully, the right data migration partner can perform these critical services non-disruptively in a matter of hours. This enables the organization and its partners to access a range of capabilities to minimize data migration efforts, including being able to migrate “hot data” to a new, more powerful array without downtime. Hot data is any data that is in constant demand, such as a database or application that’s essential for your business to operate.


Stop Suffocating Success! 7 Ways Established Businesses Can Start Thinking Like a Startup.

Startups aren't trapped by old rules—they're in the process of inventing themselves. Obviously, established companies can't just completely throw out the rulebook. But remember rules should exist to help, not just because they've always been there. Otherwise, people wind up blindly following often annoying processes without thinking about the end goal. For example, if multiple clients ask for a product feature that hasn't been included, but there isn't a feature review meeting until the next quarter, does it make sense to follow the rules and wait? Or should staff be empowered to add the feature (or, at least, fast-track a product review)? Beware of any policy that exists because "We've-always-done-things-this-way." ... Incompetent workers can take a terrible toll. To start, everything's harder when the people around you don't carry their weight. It's also demoralizing—you're working so hard and hitting all your goals, while the person next to you fails spectacularly and apparently isn't penalized for it. Over time, you're likely to grow bitter or just stop trying so hard since results clearly don't matter.


The Stubborn Immaturity of Edge Computing

Of course, they don’t even think of it as “the edge”. To them, it’s where real work takes place. So when IT vendors and cloud providers and carriers talk about the “far edge” (where real customers and real factories and real work takes place), that makes no sense to people outside of IT vendors’ data-center-centric bubble. The real world doesn’t revolve around the data center, or the cloud. What’s really far in the real world? The cloud. The data center. Edge computing is a technology style that’s part of a digital transformation trend. Digital transformation has been on a march for decades, well before we called it that. It’s accelerated because of cloud computing, and global connectivity. A lot of the technology transformation has been taking place at the back-end. In data centers, in business models. And there’s a lot left to be done. But the true green field in digital transformation is where people and things and factories actually exist. (OK, we’ll call that the “edge”, but that’s such an old IT-centric way of talking!)


How the Future of Work Will Be Shaped by No Code AI

No-code, like other breakthroughs, is a thrilling disruption and improvement in the software development process, particularly for small firms. Among its various applications, no-code has enabled users with little technical experience to create applications using pre-built frameworks and templates, which will undoubtedly lead to further inventions and design and development in the digital town square. It also cuts down on software development time, allowing for faster implementation of business solutions. Aside from the time saved, no-code can free up computing and human resources by transferring these duties to software suppliers. ... No-code is also a game changer for many AI technology developers and non-technical people since it focuses on something we never imagined possible in the difficult field of artificial intelligence: simplicity. Anyone will be able to swiftly build AI apps using no-code development platforms, which provide a visual, code-free, and easy-to-use interface for deploying AI and machine learning models.


Code Readability vs Performance: Here is The Verdict

Code performance is critical, especially when working on projects that require high-speed computation and real-time processing; poorly performing code results in slow, sluggish user experiences. But focusing on the performance of code that is not readable is useless, and unreadable code is also more prone to bugs and errors. Performance is a quirky thing. Starting to write code with performance as the first priority is not a path that any developer would take, or even recommend. In a Reddit thread, a developer gives the example of one piece of code that runs in 1 millisecond and another that runs in 0.1 milliseconds. No one can really notice the difference between the two as long as the code is “fast enough”. So chasing performance while sacrificing the readability of the code can be counterproductive. Moreover, in the same Reddit thread, another developer pointed out that writing faster algorithms often requires writing harder code, which again sacrifices readability.
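A small, hypothetical pair of functions illustrates the trade-off: both produce identical word counts, and the hand-optimized version buys an unnoticeable speed-up at a real cost in clarity.

```python
# Both functions count words in a text; the "optimized" one shaves
# microseconds by avoiding Counter's overhead, at a readability cost.
from collections import Counter

def count_words_readable(text: str) -> dict[str, int]:
    """Clear intent: normalize, split, count."""
    return dict(Counter(text.lower().split()))

def count_words_optimized(text: str) -> dict[str, int]:
    # Hand-rolled loop with a bound method lookup; harder to read and
    # easier to get subtly wrong, for a difference no user will notice.
    counts: dict[str, int] = {}
    get = counts.get
    for w in text.lower().split():
        counts[w] = get(w, 0) + 1
    return counts

sample = "the quick brown fox jumps over the lazy dog the fox"
assert count_words_readable(sample) == count_words_optimized(sample)
```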


LockBit Group Goes From Denial to Bargaining Over Royal Mail

LockBit's about-face - "it wasn't us" to "it was us" - is a reminder that ransomware groups will continue to lie, cheat and steal, so long as they can profit at a victim's expense. Isn't hitting a piece of Britain's critical national infrastructure - as in, the national postal service - risky? After DarkSide hit Colonial Pipeline in the United States in May 2021, for example, the group first blamed an affiliate before shutting down its operations and later rebooting under a different name. While hitting CNI might seem like playing with fire, many security experts' consensus is that ransomware groups' target selection remains opportunistic. Both operators and any affiliates who use their malware, as well as the initial access brokers from whom they often buy ready-made access to victims' networks, seem to snare whoever they can catch and then perhaps prioritize victims based on size and industry. What's notable isn't necessarily that LockBit - or one of its affiliates - hit Royal Mail, but that it decided to press the attack. 



Quote for the day:


“None of us can afford to play small anymore. The time to step up and lead is now.” -- Claudio Toyama

Daily Tech Digest - February 11, 2023

How Modern Enterprise Architecture Drives Enterprise Success

The need for cross-functional, distributed technology ownership is the perfect ecosystem for the enterprise architect to shine. Traditionally, EA has governed and exerted control over technology. However, as this becomes decentralized across the business and, thus, influenced by EAs, innovations become collaborative rather than authoritarian. Leaders can leverage the Enterprise Architect in their change initiatives. By pooling data across departments, executives can support their assertions on which areas of the business would benefit most from a change in design. Modeling projects means they can be compared simultaneously, helping to uncover which strategy will yield the greatest returns. EA-driven roadmaps can help leaders consider KPIs a year or ten years into the future of the business, made more solid by knowledge distilled directly from those closest to the respective tools and processes they use. Technology acquisition and skill dependencies cross product boundaries, regulatory compliance processes intersect many different processes, enterprise-wide cost objectives span multiple siloes, and cybersecurity threats could rear up anywhere.


What Is Augmented Data Management?

Augmented master data management applies ML and AI techniques to the management of master data. This enables companies to refine master data to achieve two key objectives: to optimize their business operations to run more efficiently and transform their businesses to drive growth. In terms of business optimization, this can be achieved in several ways using augmented master data management. First, there’s enhanced efficiency. With augmented master data management, companies can streamline business processes, which reduces the time needed to work on activities, thereby increasing efficiency and potentially leading to cost savings. Augmented master data management can also improve the ability to comply with regulations. The number of regulations and demands on companies has been increasing; for example, we’ve seen this lately with ESG reporting and privacy laws. From a data perspective, it can be tedious and complex for companies to comply with these regulations.


Data is a stumbling block for most multicloud deployments

Migrating data from one cloud service to another can be challenging. It is important to have a solid data portability strategy in place that considers data format, size, and dependencies. Most of those moving to multicloud can’t answer this question: “What would it take to migrate this data set from here to there?” This needs to be in your back pocket, as we’re seeing some data sets move from single and multicloud deployments back to on premises. You must give yourselves options. ... Managing data across multiple cloud services can be a resource-intensive task if you attempt to do everything manually. It is essential to have a centralized data management system in place that can handle diverse data sources and ensure data consistency. Again, this needs to be centralized, abstracted above the public cloud providers and native data management implementations. You need to deal with data complexity on your terms, not the terms of the data complexity itself. Most are opting for the latter, which is a huge mistake.


Understanding the Role of CIOs in Test Data Management

A zero-trust framework is a cyber security approach wherein no user or system is trusted by default and all must be authenticated before being granted access. Only users who are verified by the protocol get access to the systems. It is a great leap over traditional cybersecurity models that primarily operate on assumptions of trust. Fully achieving a zero-trust framework requires automating test data through a DevOps platform. Given the responsibility of implementing cyber security throughout the enterprise, a CIO is influential here. The leader has to roll out the DevSecOps approach, which integrates cyber security from the beginning. The real challenge here is to build a culture that does not treat security as an afterthought. Security should be a part of the SDLC, and a CIO should educate all stakeholders accordingly. With DevSecOps, they work towards pipelining the DevOps pieces with security protocols. A CIO has to upgrade an enterprise’s approach towards risk and security in test data and perfect the delivery pipeline of quality data sets for different environments.


A step-by-step guide to setting up a data governance program

Data governance is a crucial aspect of managing an organization’s data assets. The primary goal of any data governance program is to deliver against prioritized business objectives and unlock the value of your data across your organization. Realize that a data governance program cannot exist on its own – it must solve business problems and deliver outcomes. Start by identifying business objectives, desired outcomes, key stakeholders, and the data needed to deliver these objectives. Technology and data architecture play a crucial role in enabling data governance and achieving these objectives. Don’t try to do everything at once! Focus and prioritize what you’re delivering to the business, determine what you need, deliver and measure results, refine, expand, and deliver against the next priority objectives. A well-executed data governance program ensures that data is accurate, complete, consistent, and accessible to those who need it, while protecting data from unauthorized access or misuse.


Why Businesses Need To Think Bigger When It Comes To Automation

Far too often, automation projects are approached as siloed, one-off opportunities to fine-tune a specific business process or function. Maybe it's the introduction of a conversational AI tool to improve front-line customer support functions or the development of a new payment processing or credit decisioning solution to build out a digital payments infrastructure. Whatever the specific use case, automation projects that pursue a singular vision of making one part of the business faster or more efficient regularly fail because they simply aren't big enough. To really extract value from automation, these projects need to start with an enterprise-wide vision and break down the walls between data, analytics, digital and operational teams to redefine business processes across multiple functions. Put simply, businesses need to stop thinking about automation and start focusing on hyperautomation. Businesses that understand this distinction and embrace automation not as a focused cost-cutting project but as an opportunity to transform legacy business and IT processes into a fully synchronized, smart workflow should be well positioned to confront the challenges of the current marketplace.


Surge of swatting attacks targets corporate executives and board members

The way it works in this new corporate swatting surge is that the malicious actors go to the websites of corporations, identify the top executives and board members, and with lists in hand, visit the websites of data brokers such as 411.com, Spokeo, and others. While there, the swatters grab whatever they can – names, addresses, phone numbers, email addresses, whatever is available. It is a "one-stop shop for finding the locations of executives and corporate officers," says Pierson. Alternatively, the threat actors plumb the archives of content aggregated from thousands of data breaches over the years. The swatter can easily find out that an executive "ordered new jogging shorts or whatever" and where those shorts were shipped, he says. Once the cybercriminals have that information, they do one of two things: use synthesized voice devices or make robotic recordings and call the police. The messages generally focus on a hostage or murder situation. 


Governance, Processes and Planning: Three Significant Countermeasures to Being Hacked

Together, governance, processes and planning help organizations to effectively manage and protect their digital assets by providing a clear framework for decision-making, establishing clear procedures for incident response and risk management, and developing a comprehensive security strategy that aligns with the organization’s overall goals and priorities. Through governance, processes and planning, your organization can start to fix the people vulnerability. So how can your organization develop and implement governance, processes and planning countermeasures? ISACA’s CMMI Cybermaturity Platform is a great place to start. The CMMI Cybermaturity Platform will help your organization identify what it does well and where your weaknesses are. The CMMI Cybermaturity Platform also aids your organization in showing where your gaps are in governance, processes and planning, three often overlooked critical countermeasures to hacking. The CMMI Cybermaturity Platform is an easy-to-use architecture model that simplifies identifying gaps in new or existing cybersecurity programs.


What is predictive analytics? Transforming data into future insights

Predictive analytics makes looking into the future more accurate and reliable than previous tools. As such, it can help adopters find ways to save and earn money. Retailers often use predictive models to forecast inventory requirements, manage shipping schedules, and configure store layouts to maximize sales. Airlines frequently use predictive analytics to set ticket prices reflecting past travel trends. Hotels, restaurants, and other hospitality industry players can use the technology to forecast the number of guests on any given night in order to maximize occupancy and revenue. By optimizing marketing campaigns with predictive analytics, organizations can also generate new customer responses or purchases, as well as promote cross-sell opportunities. Predictive models can help businesses attract, retain, and nurture their most valued customers. Predictive analytics can also be used to detect and halt various types of criminal behavior before any serious damage is inflicted.
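As a minimal sketch of the forecasting pattern described above (for instance, a hotel projecting guest numbers), here is a linear-regression example with fabricated booking data; real models would use far richer features than a simple time trend.

```python
# Fit a model on past demand and project the next periods.
import numpy as np
from sklearn.linear_model import LinearRegression

weeks = np.arange(1, 11).reshape(-1, 1)          # weeks 1..10
bookings = np.array([120, 132, 128, 141, 150, 149, 160, 171, 168, 182])

model = LinearRegression().fit(weeks, bookings)  # learn the trend
forecast = model.predict(np.array([[11], [12]])) # project weeks 11-12
print(f"forecast bookings for weeks 11-12: {forecast.round(0)}")
```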


5 top workforce concerns for CIOs

Mental health has become a critical concern in the workplace, particularly as the global workforce resets following the COVID-19 pandemic. The prolonged stress and uncertainty of the past several years, coupled with a shaky economy, has taken a toll on many employees, and they are looking for support and resources to help them cope. A key part of supporting employee mental health is eliminating the stigma around taking time to recharge mentally. ... High employee engagement is an ongoing objective for organizations, as engaged employees are more likely to be productive, motivated, and committed to their work. However, employee engagement has steadily declined throughout the Great Resignation, “quiet quitting,” and recent tech layoffs. To boost engagement, focus on creating a positive and supportive work environment, fostering open communication, and providing opportunities for employee growth and development. Also, seek feedback from employees to understand their needs and concerns and work to address them.



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - February 10, 2023

Getting Started with Design Thinking for Developers and Managers

More visual, intuitive types of designers do still sometimes struggle working with developers; they expect developers to just accept their intuitive conclusions. But developers in general don't go for touchy-feely intuitive design. They want more logical reasons for design choices. Besides, true design thinking goes beyond intuition, leveraging measurement and analysis too. It's therefore generally a bad idea for a more intuitive, visually oriented designer to lead a design team containing developers. While those designers are valuable members of a design thinking team, their disconnect with developers and their bias towards intuition over analysis means they should not be running the show. ... Visual designers have a vested interest in fostering that impression. Some developers are happy to go along with it because it gives them an excuse to delegate any responsibilities concerning design. It's also a simple and tangible concept for managers to get their heads around. The visual nature is something they can see immediately. More sophisticated forms of design take more effort to understand.


How Cybercriminals Are Operationalizing Money Laundering and What to Do About It

Financial institutions, cryptocurrency companies, and other organizations face increasing fines — sometimes ranging in the millions and billions of dollars — for failure to root out money laundering as government agencies and regulators worldwide seek to crack down on this scourge. ... A preferred tactic by cybercriminal organizations looking to grow their ranks is to use what are known as money mules. These are individuals who are brought in to help launder money — sometimes, unknowingly. They're often lured in under false pretenses and promises of legitimate jobs, only to discover that "job" is to help launder the profits from cybercrime. Back in the day, this money shuffling was typically done through anonymous wire transfer services. While they often got away with it, such transfers are far easier for law enforcement and regulators to track. These days, most criminals have moved to using cryptocurrency. Its relative lack of regulatory oversight, coupled with often-anonymous transactions, make it almost the ideal vehicle for money laundering. 


Solving Problems With The IoT

Despite security concerns, the IoT is so useful that it continues to grow by leaps and bounds — so much so that when ChatGPT, a new AI chatbot, was asked to list the top 100 applications for the IoT, the chatbot simply added the word “smart” in front of many common places and items. For example, it responded with “smart aquariums, smart theme parks, smart libraries,” etc. Put simply, the IoT is everywhere. What makes it so popular is its ability to solve problems. For instance, safety is critical in manufacturing, industrial, chemical processing, mining, and many other applications. IoT sensors can be used to monitor environments for the presence of hazardous chemicals. If there is a gas leak, a real-time alert can be sent to the control centers to prevent potential accidents from occurring. In addition, aging infrastructure such as bridges, buildings, highways, and power grids pose risks. To help mitigate these risks, sensors in an IoT network can track cement movement and the changing size of cement cracks. IoT monitoring of the moisture in some building structures can provide advance warning of potential disasters such as collapsing buildings and bridges.
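The gas-leak monitoring pattern reduces to a simple loop in sketch form: readings stream in, and any value above a safety threshold raises an alert. The sensor IDs, threshold, and alert channel below are stand-ins, not a real device API.

```python
# Threshold alerting on a simulated stream of gas-sensor readings.
from typing import Iterable

CO_PPM_LIMIT = 35.0   # hypothetical carbon monoxide safety threshold

def monitor(readings: Iterable[tuple[str, float]]) -> None:
    for sensor_id, ppm in readings:
        if ppm > CO_PPM_LIMIT:
            # In production this would page the control center (MQTT, SMS, ...).
            print(f"ALERT {sensor_id}: CO at {ppm} ppm exceeds {CO_PPM_LIMIT} ppm")

monitor([("plant-3/line-2", 12.4), ("plant-3/line-2", 41.7), ("plant-1/dock", 8.9)])
```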


MoD issues revised cloud strategy as it prepares to move top-secret data off-premise by 2025

The Department will be pursuing a multi-cloud approach to sourcing these off-premise capabilities, because no one supplier will be able to address the “complexity of Defence’s requirements” nor its “evolving ambition” or scalability demands, according to the document. “By 2025, the services required by game-changing military capabilities will be available across Defence, accelerating our level of cloud consumption,” the document continued. “We will take advantage of evergreen solutions to prevent future obsolescence, and to ensure immediate access to the latest technologies, driving the pace of modernisation. “By 2025, we will use cloud platforms as the foundation on which to build capabilities in big data, advanced analytics, automation and synthetics. We will spend the majority of our compute expenditure investing in strategic modern platforms, rather than maintaining obsolete legacy platforms.” Elsewhere in the document, the organisation said its aim is to be “cloud-native” as much as possible, with members of the Defence community encouraged to take an MODCloud-first approach to procuring services.


Google’s AI chatbot is out to rival ChatGPT

While Bard is still in its early stages of development, Google is confident that the system will be able to compete with ChatGPT and other AI systems in the market. Apart from assisting with search engine capabilities, Bard will bring other features that will assist developers in developing their applications using Google’s language model. “Beyond our own products, we think it’s important to make it easy, safe and scalable for others to benefit from these advances by building on top of our best models,” Pichai wrote. “Next month, we’ll start onboarding individual developers, creators and enterprises so they can try our Generative Language API, initially powered by LaMDA with a range of models to follow. Over time, we intend to create a suite of tools and APIs that will make it easy for others to build more innovative applications with AI.” For other end users, there has been a mixed reaction regarding how AI chatbots will affect the order of things. While some people argue that the advent of these chatbots and their potential integration into search engines will aid the creative and marketing industries, others think otherwise.


Yes, CISOs should be concerned about the types of data spy balloons can intercept

Nation-states will collect intelligence to further their knowledge of rivals and a large part of that intelligence will come from private corporations. The fact that China chose this particular time to do so is indicative of its desire to place the United States in a weakened position ahead of a planned visit to China by US Secretary of State Antony Blinken, if it could. The United States didn’t take the bait and postponed the visit indefinitely and sent a demarche to the government of China. The “sources tell us” snippets from the mainstream media note that the United States purposefully allowed the balloon and its collection platform to continue its mission and to receive navigational commands but jammed the transmission of non-navigational signals. Thus, it is probable that the Chinese tried to issue a destruct command (not unlike those any CISO can do for a lost iPhone) but were unable to do so due to US countermeasures. Regardless of the outcome of that technological duel in the sky, the containers will provide valuable intelligence. 


10 Tips for Developing a Data Governance Strategy

There is no “one” right data governance leader. In some companies, the data governance leader is the chief data officer. In others, it may be the CFO, chief risk officer, or CIO. Historically, the role has resided within the realm of IT. Today, that’s changing. A Forrester study found that 45% of companies make data governance mostly business-focused, while 53% are IT-focused. Forrester advises that data governance is more a business problem and should be anchored in a business context. No matter which office heads up the data governance strategy, the team should be spread throughout the company, incorporating subject-matter or line-of-business experts, data analysts, data scientists, the IT department, and legal counsel. “What we’ve done wrong in the past is taken a role and turned it into a position, versus thinking about how we use data, build insights, and make decisions from our data,” Goetz said. “If you can see how you operate as a culture, you can figure out who should own it in the company.”


Cyber Insurance Costs Lead to Scrutiny of Business Partners

“Many suppliers to large companies often are small businesses that lag behind in their deployment of cybersecurity controls. They can be an easy path for cyber criminals to launch attacks on larger organizations,” she says. “This additional risk needs to be considered when pricing cyber coverage and has an impact on cyber insurance premiums.” She explains that having adequate cybersecurity deployed when interacting with third-party vendors drastically improves the risk profile of any organization. “It also makes it more insurable for cyber, which in return lowers premiums or opens more coverage options,” Dumont adds. This approach by larger businesses ranges, for example, from requiring compliance with security best practices when deploying cloud providers to requiring multi-factor authentication (MFA) for maintenance services that access the company’s connected equipment. From her perspective, third-party scrutiny on cybersecurity yields positive outcomes for all, starting with the most important benefit, which is to lower the likelihood of facing a cyber incident.


Seven deadly sins of devising a cloud strategy

Relying on a single vendor to implement a cloud strategy is an inflexible approach that leaves enterprises isolated when it comes to maintaining control over the performance of their digital platform. It can mean having little or no say in which services and providers can be adopted while being locked-in to lengthy service agreements, even when prices rise, or when service levels fall off. This is particularly pertinent given the dramatic reduction in the cost of cloud services in recent years. ... Losing track of costs is easily done when implementing a cloud strategy, especially in cases when the scale of the transformation is significant. It’s imperative to identify areas where resources are being mismanaged and then eliminate waste. For example, in a sector such as financial services, which has traditionally been slow to adopt cloud computing, taking a “rightsizing” approach will help identify areas that have not been provisioned correctly. They can then be reconfigured to optimal levels. In practice, this means only purchasing cloud services that a business actually needs and that it will use.


Secure Delivery: Better Workflows for Secure Systems and Pain-Free Delivery

When reviewing architecture at a high level, any security concerns are usually big-ticket items that require considerable effort to retrofit, and sometimes even the redesign of a critical feature of a system like authentication. Lower-level threats and vulnerabilities are often found by outsourcing deeper technical security knowledge from an external penetration testing company, who are engaged to attack the system and highlight any serious issues. After these activities are complete, we usually see a fractious negotiation around risk and resources, with the engineering team pushing back on making expensive, time-consuming changes to their system architecture and operational processes just before their release deadlines, and the system owner pushing for risk acceptance for all but the most serious risks. Overall, security can be seen as something that’s owned by the security team and not an attribute of a system’s quality that’s owned by engineers, like performance or reliability.



Quote for the day:

"Leadership - leadership is about taking responsibility, not making excuses." -- Mitt Romney

Daily Tech Digest - February 09, 2023

The role of the database in edge computing

In a distributed architecture, data storage and processing can occur in multiple tiers: at the central cloud data centers, at cloud-edge locations, and at the client/device tier. In the latter case, the device could be a mobile phone, a desktop system, or custom-embedded hardware. From cloud to client, each tier provides higher guarantees of service availability and responsiveness over the previous tier. Co-locating the database with the application on the device would guarantee the highest level of availability and responsiveness, with no reliance on network connectivity. A key aspect of distributed databases is the ability to keep the data consistent and in sync across these various tiers, subject to network availability. Data sync is not about bulk transfer or duplication of data across these distributed islands. It is the ability to transfer only the relevant subset of data at scale, in a manner that is resilient to network disruptions. For example, in retail, only store-specific data may need to be transferred downstream to store locations.
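A minimal sketch of "transfer only the relevant subset" for the retail example: each store pulls just the documents tagged with its own channel rather than replicating the whole catalogue. The in-memory list stands in for a real sync gateway.

```python
# Channel-filtered sync: each edge location receives only its own subset.
catalogue = [
    {"sku": "A-100", "price": 9.99,  "channels": ["store-12", "store-40"]},
    {"sku": "B-220", "price": 4.50,  "channels": ["store-40"]},
    {"sku": "C-310", "price": 19.00, "channels": ["store-12"]},
]

def sync_for(store: str, docs: list[dict]) -> list[dict]:
    """Return only the documents relevant to this store's channel."""
    return [doc for doc in docs if store in doc["channels"]]

print(sync_for("store-12", catalogue))   # store 12 never receives B-220
```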


Data management in the digital age

To ensure effective data management, organisations can adopt various strategies and tactics that have proven their worth in modern organisations. The first of these is a comprehensive risk assessment. Performing risk assessments regularly will ensure that you can identify and prioritise vulnerabilities before they become gaping security holes that can be exploited. Ongoing risk assessments should be bolstered by robust and current security and data management policies that reflect the threat landscape. “You also need to implement employee training and communication because humans are often the weakest link in even the most advanced security system,” says Grimes. “You must ensure that security is understandable and accessible and that the lessons are driven home through constant reminders and training programmes. All it takes is one click to bring down the most sophisticated security system on the planet.” It’s also important to collaborate with vendors and partners that understand the security landscape and have the tools and expertise required to support the organisation’s security posture. 


Coaching IT pros for leadership roles

You can teach someone to code, manage money, and complete the tasks of being a manager. But teaching is limited. To develop a leader, you have to coach them to become someone who can make decisions on their own, communicate well, and plan strategically. But the transition from teacher to coach can be challenging. ... Then practice what Davis calls the “ask first, tell second” method of coaching. “Ask them what’s exciting about this. Then ask what’s scary?” And, since the core skill of coaching is listening, “give them the time and space to answer and listen to what they say,” she says. They might not want to give up the thing they are good at to learn something hard. They might feel jealous of team members who get to keep their hands on the technology. They might fear that others aren’t good enough to do the work they’ve been doing. And they might not yet see the benefits of a leadership role. In the “tell” portion, point out the influence they will have on larger issues in the company, the essential role of managers on the team, the pleasure of helping people grow into larger careers, and how this will give them a seat at the table.


4 characteristics of enterprise application platforms that support digital transformation

The need to deploy applications in various different cloud infrastructures—public cloud, private cloud, physical, virtual, and edge—based on business needs is a key requirement for most established enterprises. As more and more business value is created with the Internet of Things (IoT), edge computing, and artificial intelligence and machine learning (AI/ML), the need to deploy applications across these cloud providers from devices, edge datacenters, on-prem, and colocation providers to the public cloud ecosystem is growing exponentially. For an enterprise, a baseline application platform that can be deployed on all these cloud provider types is essential, if not vital, to support current and future business needs. Another aspect to consider is the growth and distribution of enterprise data. As the famous saying goes, "data is the new oil," and the amount and pace of enterprise data growth are unprecedented. Enterprises are looking at options to leverage this data to create meaningful business insights. 


How to Combine RPA and BPM the Smart Way

Seamless digital integration is more than just cobbling together the best digital solutions on the market. How these advanced technologies interact makes a huge difference. Technologies designed to work together are crucial to achieving the productivity gains promised by digital transformation. With a comprehensive platform, organizations don’t need to worry about building integrations because the platform already includes them. Moreover, a single platform is easier to buy and manage because it comes from the same licensor rather than going through the procurement process with multiple suppliers. Companies need to take care when determining which IA platform to adopt. The benefits of a comprehensive platform are increasingly recognized by vendors and their customers, pushing suppliers to put together multifeatured automation platforms. If companies choose a platform insufficient for their needs, they face reworking costs down the road. Nevertheless, suppose organizations have already taken on technical debt and are looking to rework their digital transformation journey.


5 Technologies Powering Cloud Optimization

Cloud cost management is a critical component of optimization that helps organizations to monitor and manage their cloud spend. The goal is to ensure that organizations are only paying for the cloud resources they actually need and that they are using those resources efficiently. ... Autoscaling is a technology that enables organizations to automatically scale their cloud resources up or down as needed to meet changing demands. The goal of autoscaling is to ensure that organizations always have the right amount of resources to support their workloads while minimizing costs and ensuring that their systems are always available when they are needed. Autoscaling works by monitoring the performance and usage of cloud resources, such as compute instances, storage and network traffic, and automatically adjusting the size of those resources to meet changing demand. ... An API gateway is a server that acts as an intermediary between an application and one or more microservices. The API gateway is responsible for request routing, composition and protocol translation, which enables microservices to communicate with each other securely and efficiently.
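As a rough illustration of the autoscaling loop described above, this sketch nudges an instance count up or down to keep a utilisation metric inside a target band; the thresholds and simulated readings are arbitrary.

```python
# Simplified autoscaler: scale out when hot, scale in when idle, hold otherwise.
def desired_instances(current: int, cpu_util: float,
                      low: float = 0.30, high: float = 0.70,
                      min_n: int = 1, max_n: int = 20) -> int:
    if cpu_util > high:            # overloaded: scale out
        return min(current + 1, max_n)
    if cpu_util < low:             # idle capacity: scale in, save cost
        return max(current - 1, min_n)
    return current                 # within the target band: hold steady

n = 4
for sample in [0.82, 0.77, 0.55, 0.21, 0.18]:   # simulated CPU readings
    n = desired_instances(n, sample)
    print(f"cpu={sample:.0%} -> {n} instances")
```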


Streaming Data Management for the Edge

Managing data at the edge is actually quite easy. What’s hard is how you monetize it. How do you get value from it? How do you take the data that’s streaming into the organization and analyze it, inference on it, and act on it as it’s coming in? How do you use this data to help your customer or stakeholder? Think about a retailer who’s trying to do in-store queue management, trying to identify situations where customers are abandoning their carts because the lines are too long, where you’re trying to watch for theft, for shrinkage. It isn’t the management of the data that’s as big a challenge. It is the ability to take that data and make better operational decisions at the point of customer interaction or operational execution. That’s the challenge. And so, we need a different mental frame as well as a different data and analytics architecture that is conducive to the fact that this data that’s coming in, in real time, has value as it’s coming in. Historically, in batch worlds, we didn’t care about real-time data. The data came in. 
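In sketch form, "acting on data as it arrives" for the queue-management example might look like a rolling window that triggers an action when the average queue length stays too high; the readings and threshold are invented.

```python
# Rolling-window decision on a simulated stream of per-minute queue lengths.
from collections import deque

WINDOW, MAX_AVG_QUEUE = 5, 4.0
recent = deque(maxlen=WINDOW)

for queue_len in [2, 3, 5, 6, 6, 7, 3]:      # simulated readings
    recent.append(queue_len)
    avg = sum(recent) / len(recent)
    if len(recent) == WINDOW and avg > MAX_AVG_QUEUE:
        print(f"open another till: avg queue {avg:.1f} over last {WINDOW} mins")
```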


DevOps isn’t dead: How platform engineering enables DevOps at scale

Platform engineers could automate almost all this work by building it into an IDP. For example, instead of manually setting up Git repositories, developers can request a repository from the IDP, which would then create it. The IDP would then assign the right user group and automatically integrate the correct CI/CD template. The same pattern applies to creating development environments and deploying core infrastructure. The IDP acts as a self-service platform for developers to request services and apply configurations, knowing security best practices and monitoring are built in by default. IDPs can also automatically set up projects in project tracking software and documentation templates. As you can see, platform engineers don’t replace DevOps processes. They enhance them by building a set of standardized patterns into a self-service internal development platform. This removes the burden of project initialization so teams can start providing business value immediately, rather than spending the first few weeks of a project setting up and working through teething issues.
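A purely hypothetical sketch of that self-service flow: one request to the IDP yields a repository with the pipeline and access group wired in by default. The platform calls are stubs standing in for a real IDP's API; no actual product is implied.

```python
# Stubbed IDP self-service flow: one call provisions repo, access, and pipeline.
def create_git_repo(name: str) -> str:
    print(f"created repo {name}")
    return name

def assign_user_group(repo: str, team: str) -> None:
    print(f"granted {team} access to {repo}")

def attach_ci_template(repo: str, template: str) -> None:
    print(f"wired {template} CI/CD pipeline into {repo}")

def provision_service(name: str, team: str, template: str = "python-service") -> str:
    """One request to the IDP yields a repo with pipeline and access built in."""
    repo = create_git_repo(name)
    assign_user_group(repo, team)
    attach_ci_template(repo, template)
    return repo

provision_service("payments-reconciler", team="platform-payments")
```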


The Dos and Don‘ts of API Monetization

Before diving into best practices and antipatterns, let’s go over the core technical requirements for enabling API monetization:

Advanced metering: Because different customers may have different levels of access to APIs under varying pricing plans, it’s critical to be able to manage access to API requests in a highly granular way, based on factors like total allowed requests per minute, the time of day at which requests can be made and the geographic location where requests originate.

Usage tracking: Developers must ensure that API requests can be measured on a customer-by-customer basis. In addition to basic metrics like total numbers of requests, more complex metrics, like request response time, might also be necessary for enforcing payment terms.

Invoicing: Ideally, invoicing systems will be tightly integrated with APIs so that customers can be billed automatically. The alternative is to prepare invoices manually based on API usage or request logs, which is not a scalable or efficient approach.

Financial analytics: The ability to track and assess the revenue generated by APIs in real time is essential to many businesses that sell APIs.
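A minimal sketch of the metering and usage-tracking requirements above: a fixed-window quota per pricing plan, with every accepted request logged for later invoicing. The plans, limits, and customer names are invented for illustration.

```python
# Per-customer metering: enforce a per-minute quota by plan and record
# raw usage that invoicing and analytics can consume later.
import time
from collections import defaultdict

PLAN_LIMITS = {"free": 60, "pro": 600}         # allowed requests per minute, per plan

usage_log = []                                 # (customer, timestamp) rows feed invoicing
window_counts = defaultdict(int)               # requests seen per (customer, minute)

def allow_request(customer: str, plan: str) -> bool:
    """Meter a request against the customer's plan and record it for billing."""
    minute = int(time.time() // 60)            # fixed one-minute window
    if window_counts[(customer, minute)] >= PLAN_LIMITS[plan]:
        return False                           # over quota: reject (or prompt an upgrade)
    window_counts[(customer, minute)] += 1
    usage_log.append((customer, time.time()))  # raw usage for invoices and analytics
    return True

print(allow_request("acme-corp", "free"))      # True until the 61st call this minute
```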


How to unleash the power of an effective security engineering team

Security engineering teams should be able to build and operate the services they produce. You build it. You run it. This level of ownership within a group is vital from a technical competence standpoint and culturally, setting the tone around accountability. Technically speaking, a team that can own its services will proficiently manage infrastructure, CI/CD tooling, security tooling, application code, deployments, and the operational telemetry emitted from a service. In addition, the skills backing all that support as a team are likely to be highly transferable in support of other groups across the organization. Teams that understand, embrace, and optimize for developer experience (DevX) are likely to be favored. Beyond that, such a team will have a particular focus on eliminating friction. Friction makes things take longer and cost more, creates longer learning cycles, and can lead to frustration. Less friction will lead to things generally running much more smoothly. Sometimes friction is necessary and should be intentional. An example is a forced code review on critical code before it's merged.



Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree