Daily Tech Digest - January 16, 2023

Why Cyber Insurance Will Revive Cyber Business Intelligence

Because cyber insurance deals with risk that has been transferred, it is subtly but powerfully distinct from the need to understand your own risk. In many cases, insurance companies that can curate low-risk pools and a favorable loss ratio can significantly improve profits. That’s not the only way they make money, but it is one way. Now enter the resurgence of cyber business intelligence. While concepts like cyber threat intelligence and risk assessments focus on preventing loss, cyber business intelligence aligns with concepts already utilized elsewhere in a business environment: “What pieces of knowledge and trends can I follow, such that by following them I can be more profitable?” This is a different mindset, one anchored in the idea that “you’ve got to spend money to make money.” It drives a culture and enthusiasm that can foster better innovation, better results and faster progress. There’s another key word there: business. This information is relevant not only to technical experts but equally to business leaders and key decision makers.


No Black Boxes: Keep Humans Involved In Artificial Intelligence

Not all AI needs are created equal. For instance, in low-stakes situations, such as image recognition for noncritical needs, it’s likely unnecessary to understand how the programs are working. However, it is critical to understand how code operates and continues to develop in situations with important outcomes, including medical, hiring, or car-safety decisions. It’s important to know where human input and intervention are needed. Additionally, because AI code is written mainly by educated men, according to (fittingly) the Alan Turing Institute, there’s a natural bias toward reflecting the experiences and worldviews of those coders. Ideally, coding situations in which the end goal implicates vital interests need to focus on “explainability” and clear points where the coder can intervene and either take control or adjust the program to ensure ethical and desirable end performance. Further, those developing the programs—and those reviewing them—need to ensure the source inputs aren’t biased toward certain populations.


3 Things New Engineering Managers Should Focus On

A high-performing team consists of engaged, happy, and motivated people — truly getting the best out of your team means getting the best from each individual. So what does that mean for you? It means quickly getting up to speed on each team member’s background, experiences, portfolio, strengths, growth areas, and goals. How do they want to be recognized? What style of feedback do they prefer? How do they learn best? What goals do they have? The more nuance you learn about each person, the more successful you will be in leading them. By setting up 1:1 meetings, you’ll be able to learn about each person on your team, coach them, and discuss their progress towards goals. ... Instead of rolling in and making changes, spend this time learning about the processes your team is already using. What are the team’s goals? How do they work together and separately? How does your team integrate with other teams — or does it not, and is that an issue? Who are the customers and partners?


Post-quantum cybersecurity threats loom large

Considering this net-positive shift in budgets, it’s no surprise that 74% of enterprise leaders have adopted or are planning to adopt quantum computing. Interestingly, nearly 30% of respondents that have adopted or plan to adopt quantum computing expect to see a competitive advantage from it within the next 12 months. This represents more than a sevenfold increase year-over-year from 2021 (4%) and highlights the growing commitment to near-term quantum computing initiatives as the technology continues to mature. “We’re getting a unique glimpse into the quantum adoption mindset of global enterprise executives, which mirrors what we’re seeing in our customer base,” said Christopher Savoie, CEO of Zapata Computing. “These findings become more interesting when compared to the data we saw last year. Over the past 12 months, we’ve seen significant new developments in technology, particularly generative AI, and near-term advantages from quantum-inspired technologies that are fueling the momentum for quantum computing planning and adoption.”


Data will be king in 2023!

The importance of cyber-risk governance is no longer limited to CISOs; conversations are deepening at the C-suite level on how organizations can ensure data resiliency, adaptability, and security. As we approach 2023, business leaders will need to assess their data infrastructure with a five-point focus — scalability, flexibility, agility, security, and cost. Data protection and management will become a top-tier priority for business leaders. A significant share of the IT budget will be allocated to technologies to prevent, detect, and recover from cyberattacks, which are a matter of not if, but when. A study by PwC found that 62% of respondents expect their security budget to increase by as much as 10% in 2023. As cloud investments continue to soar in 2023, the threat landscape will shift in parallel and grow more sophisticated. As per the recent Commvault-IDC survey, over 28% of Indian enterprises stated they will have multiple private and/or public cloud environments and migrate workloads and data between them by 2023. Thus, data protection and recoverability will be essential components of the enterprise security toolbox.


What to expect from SASE certifications

Compared to other networking certifications, like the CCNA, which is more about how to operate the technology, Cato’s SASE and SSE certifications are high-level overviews. “Our certification is more about what SASE and SSE mean, what are the implications, and what it means to different IT teams,” says Webber-Zvik. “You see presentations, whiteboards, reading materials, and at the end of each section, there is a quiz. When you complete all the sets and pass all the tests, you get the certification.” The majority of the material covered is not Cato-specific, he says. However, the certification does use Cato’s implementation of SASE and SSE in its examples. Take, for instance, single-pass processing. According to Gartner, this is a key characteristic of SASE, and it means that networking and security are integrated. “We explain it according to Gartner’s definition,” Webber-Zvik says. “We also provide an example of Cato’s implementation and use that to articulate what single-pass processing can look like when it’s outside Gartner theory and in real life.” There is no charge for Cato’s certification training and exam, but that might change, he says.


How to Overcome Challenges in an API-Centric Architecture

There are several potential solutions. If the use case allows it, the best option is to make tasks asynchronous. If you are calling multiple services, the request inevitably takes too long, and it is often better to set the right expectations by promising to provide the results when ready rather than forcing the end user to wait. When service calls have no side effects (such as search), there is a second option: latency hedging, where we start a second call when the wait time exceeds the 80th-percentile latency and respond with whichever call returns first. This can help control the long tail. The third option is to complete as much work as possible in parallel: rather than waiting for each service call to return before issuing the next, start as many calls as possible concurrently. This is not always possible because some service calls might depend on the results of earlier ones. Moreover, code that calls multiple services in parallel, then collects and combines the results, is considerably more complex than code that runs them one after the other.
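
As a minimal sketch of the hedging option (assuming a hypothetical call_service coroutine standing in for a real, side-effect-free request, and an 80th-percentile latency you have measured yourself), latency hedging might look like this in Python:

    import asyncio
    import random

    async def call_service(tag: str) -> str:
        # Hypothetical stand-in for a real, side-effect-free service call.
        await asyncio.sleep(random.uniform(0.01, 0.20))
        return f"result from {tag}"

    async def hedged_call(p80_latency: float = 0.08) -> str:
        # Fire the primary request and give it the 80th-percentile latency.
        primary = asyncio.create_task(call_service("primary"))
        done, _ = await asyncio.wait({primary}, timeout=p80_latency)
        if done:
            return primary.result()
        # The primary is slow: start a hedge and take whichever finishes first.
        hedge = asyncio.create_task(call_service("hedge"))
        done, pending = await asyncio.wait(
            {primary, hedge}, return_when=asyncio.FIRST_COMPLETED
        )
        for task in pending:
            task.cancel()  # abandon the slower call
        return done.pop().result()

    print(asyncio.run(hedged_call()))

Because the hedge duplicates work, this pattern only suits idempotent, side-effect-free calls such as search, as noted above.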


The CIO’s role is changing – here’s why

Faced with an increasing number of threats, both internal and external, CIOs have had to prioritise areas such as cyber security in recent years just to keep their businesses protected. In doing so, they’ve also been charged with embracing the latest technological developments such as artificial intelligence, big data analytics, and the plethora of connected devices that comprise the burgeoning Internet of Things; technologies that will foster greater innovation and provide their businesses with a more competitive edge. Increasingly, however, it won’t necessarily be an organisation’s IT department that drives the adoption of emerging technologies. More often, other areas of the business will be in a better position to identify the innovative technology that will deliver greater customer value, and the specific use cases in which it can be implemented. For now, though, 77 per cent of CIOs surveyed by Gartner said that IT staff primarily provide innovation and collaboration capabilities, compared with 18 per cent who said non-IT personnel provide these tools.


SRE in 2023: 5 exciting predictions

Whether it’s AI assistance, VR immersion, or web3 decentralization, 2023 will continue to push organizations to adopt cutting-edge technology. It’s a challenge to guess which of these ideas will flourish and which will flounder, but either way, a reliable foundation will be necessary. Adopting even the most successful new ideas at scale will bring new obstacles and new types of incidents, and these growing pains will require new approaches. As organizations experience them, they’ll turn to SRE to keep their customers happy while they adjust. Incident retrospectives can help teams handle new sources of incidents quickly, while a reliability mindset can keep customer happiness the number one priority. Reliability is the subjective experience of users based on their expectations of the service. While this is a helpful way to align priorities with customer needs, 2023 will bring an even more holistic definition of reliability. Organizations will start thinking about the reliability of their system not just in terms of their users’ experiences, but as a complete package covering everything from development ideation onwards.


10 data security enhancements to consider as your employees return to the office

The unauthorized disclosure of data isn’t always the result of malicious actors. Often, data is accidentally overshared or lost by employees. Keep your employees informed with cyber security education. Employees who go through regular phishing tests may be less likely to engage with malicious actors over email or text messaging. ... An inventory of software, hardware and data assets is essential. Having control over the assets with access to your corporate environment starts with an inventory. Inventories can be a part of the overall vulnerability management program to keep all assets up to date, including operating systems and software. Furthermore, a data inventory or catalogue identifies sensitive data, which allows appropriate security controls like encryption, access restrictions and monitoring to be placed on the most important data. ... Reducing your overall data footprint can be an effective way of reducing risk. 



Quote for the day:

"Smart leaders develop people who develop others, don't waste your time on those who won't help themselves." -- John C Maxwell

Daily Tech Digest - January 15, 2023

How confidential computing will shape the next phase of cybersecurity

At its core, confidential computing encrypts data at the hardware level. It’s a way of “protecting data and applications by running them in a secure, trusted environment,” explains Noam Dror, SVP of solution engineering at HUB Security, a Tel Aviv, Israel-based cybersecurity company that specializes in confidential computing. In other words, confidential computing is like running your data and code in an isolated, secure black box, known as an “enclave” or trusted execution environment (TEE), that is inaccessible to unauthorized systems. The enclave also encrypts all the data inside, allowing you to process your data even when hackers breach your infrastructure. Encryption makes the information invisible to human users, cloud providers, and other computing resources. Encryption is the best way to secure data in the cloud, says Kurt Rohloff, cofounder and CTO at Duality, a cybersecurity firm based in New Jersey. Confidential computing, he says, allows multiple sources to analyze and upload data to shared environments, such as a commercial third-party cloud environment, without worrying about data leakage.


Not All Multi-Factor Authentication Is Created Equal

Many legacy MFA platforms rely on easily phishable factors like passwords, push notifications, one-time codes, or magic links delivered via email or SMS. In addition to the complicated and often frustrating user experience they create, phishable factors such as these open organizations up to cyber threats. Through social engineering attacks, employees can be manipulated into providing these authentication factors to a cyber criminal. And by relying on these factors, the burden of protecting digital identities lies squarely on the end user, meaning an organization’s cybersecurity strategy can hinge entirely on a moment of human error. Beyond social engineering, man-in-the-middle attacks and readily available toolkits make bypassing existing MFA a trivial exercise. Where there are passwords and other weak, phishable factors, there is an attack vector for hackers, leaving organizations to suffer the consequences of account takeovers, ransomware attacks, data leakage, and more. A phishing-resistant MFA solution removes these factors entirely, making it impossible for an end user to be tricked into handing them over, even by accident, or for them to be collected by automated phishing tactics.


Europe’s cyber security strategy must be clear about open source

While the UK government has tried to recognise the importance of digital supply chain security, current policy doesn’t consider open source as part of that supply chain. Instead, regulation or proposed policies focus only on third-party software vendors in the traditional sense but fail to recognise the building blocks of all software today and the supply chain behind it. To hammer the point home, the UK’s 11,000+ word National Cyber Security Strategy does not include a single reference to open source. GCHQ guidance meanwhile remains limited, with little detailed direction beyond ‘pull together a list of your software’s open source components or ask your suppliers.’ ... In this sense, the EU has certainly been listening. The recently released Cyber Resilience Act (CRA) is its proposed regulation to combat threats affecting any digital entity and ‘bolster cyber security rules to ensure more secure hardware and software products’. First, the encouraging bits: the CRA doesn’t just call for vendors and producers of software to have (among other things) a Software Bill of Materials (SBoM); it also demands companies have the ability to recall components.


Eight Common Data Strategy Pitfalls

Lack of data culture: Data hidden within silos with little communication between business units leads to a lack of data culture. Data literacy and enterprise-wide data training are required to allow business staff to read, analyze, and discuss data. Data culture is the starting point for developing an effective Data Strategy.

The Data Strategy is too focused on data and not on the business side of things: When businesses focus too much on data alone, the Data Strategy may end up serving the needs of analytics without any focus on business needs. An ideal Data Strategy enlists human capabilities and provides opportunities for training staff to carry out the strategy to meet business goals. This approach works better if citizen data scientists are included in strategy teams to bridge the gap between the data scientist and the business analyst.

Investing in data technology before democratizing data: In many cases, Data Strategy initiatives focus on quick investment in technology without first addressing data access issues. If data access is not considered first, costly technology investments will go to waste.


Here's Why Your Data Science Project Failed (and How to Succeed Next Time)

Every data science project needs to start with an evaluation of your primary goals. What opportunities are there to improve your core competency? Are there any specific questions you have about your products, services, customers, or operations? And is there a small and easy proof of concept you can launch to gain traction and master the technology? The above use case from GE is a prime example of having a clear goal in mind. The multinational company was in the middle of restructuring, re-emphasizing its focus on aero engines and power equipment. With the goal of reducing their six- to 12-month design process, they decided to pursue a machine learning project capable of increasing the efficiency of product design within their core verticals. As a result, this project promises to decrease design time and budget allocated for R&D. Organizations that embody GE's strategy will face fewer false starts with their data science projects. For those that are still unsure about how to adapt data-driven thinking to their business, an outsourced partner can simplify the selection process and optimize your outcomes.


5 Skills That Make a Successful Data Manager

The role of a data manager in an organization is tricky. This person is often neither an IT specialist who implements databases on their own, nor a business person who is actually responsible for data or processes (that’s rather a Data Steward’s area of responsibility). So what’s the real value-add of a data manager (or even a data management department)? In my opinion, you need someone who builds bridges between the different data stakeholders on a methodical level. It’s rather easy to find people who consider themselves experts in a particular business area, data analysis method or IT tool, but it is rather complicated to find one person who is willing to connect all these people and to organize their competencies as is often required in data projects. So what I am referring to are skills like networking, project management, stakeholder management and change management, which are required to build a data community step by step as a backbone for Data Governance. Without people, a data manager will fail! So in my opinion, a recruiter who seeks data managers should not only test technical skills but also these people skills.


Why distributed ledger technology needs to scale back its ambition

There is nonetheless an expectation that DLT can prove to be a net good for financial markets. Foreign exchange markets have an estimated $8.9 trillion at risk every day because final settlement of transactions between two parties can take days. This is why the Financial Stability Board and the Committee on Payments and Market Infrastructures have focused their efforts on enhancing cross-border payments with a comprehensive global roadmap. Part of this roadmap includes exploring the use of DLT and Central Bank Digital Currencies. The problem may not be the technology itself, but the aim of replacing current technology systems with distributed networks. DLT networks are being designed to completely overhaul and replace the legacy technology that financial markets depend on today. Many pilot projects, such as mBridge and Jura, rely on a single blockchain developed by a single vendor. This introduces a single point of trust, and removes many of the benefits of disintermediation.


Why is “information architecture” at the centre of the design process?

The information architecture within a design (both process and output) makes the balancing within the equation possible. It also ensures the equation is “solvable” by other people. It does this by introducing logical coherence. It ensures words, images, shapes and colours are used consistently. And it ensures that as we move from idea to execution, we stay true to the original intent — and can clearly articulate it — so that we can meaningfully measure the effectiveness of our design. Without this internal coherence and confidence that our output is an accurate, reliable test of our hypothesis, we’re not doing design. The power of design which has a consistent information architecture is that if we find that our idea (which we translate to intent, experiments and experiences) is not equal to the problem, we can interrogate every part of the equation. We may have made a mistake in execution. Maybe our idea wasn’t quite right. Or even more powerfully, maybe we didn’t really understand the problem fully. 


Improve Your Software Quality with a Strong Digital Immune System

A strong digital immune system can improve your software quality because it is designed to guard against cyberattacks and other hostile activity targeting computer systems, networks, and hardware. It operates by constantly scanning the network and systems for indications of prospective threats and then taking the necessary precautions to thwart or lessen those dangers. This can entail detecting and preventing malicious communications, identifying and containing compromised devices, and patching security holes. A robust digital immune system should offer powerful and efficient protection against cyber threats and help individuals and companies stay secure online. Experts in software engineering are searching for fresh methods and strategies to reduce risks and maximize commercial impact. The idea of “digital immunity” offers a direction. It consists of a collection of techniques and tools for creating robust software programmes that provide top-notch user experiences. With the help of this roadmap, software engineering teams can identify and address a wide range of problems, including functional faults, security flaws, and inconsistent data.


Security Bugs Are Fundamentally Different Than Quality Bugs

For each one of the types of testing listed above, a different skillset is required. All of them require patience, attention to detail, basic technical skills, and the ability to document what you have found in a way that software developers will understand and can act on to fix the issue(s). That is where the similarities end. Each one of these types of testing requires different experience, knowledge, and tools, often meaning you need to hire different resources to perform the different tasks. Also, we can’t concentrate on everything at once and still do a great job at each one of them. Although theoretically you could find one person who is both skilled and experienced in all of these areas, such a person is rare and would likely be costly to employ as a full-time resource. This is one reason that people hired for general software testing are not often also tasked with security testing. Another reason is that people who have the experience and skills to perform thorough and complete security testing are currently a rarity.



Quote for the day:

"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls

Daily Tech Digest - January 14, 2023

How to build the most impactful engineering team without adding more people

Teams celebrate a 10% improvement in efficiency when they should be looking for a 10x improvement in efficiency. Identify key moments in your product lifecycle when it makes sense to step back and look for the substantial changes that can supercharge productivity. My company builds connectors into a huge variety of data sources. At one time, we were writing 5,000 lines of code to create a single connector, which was not sustainable. Now, a single engineer can build a connector in a week with 100 lines of code. We achieved this by designing a new development framework that allows us to exploit commonalities across the connectors we build and by greatly reducing dependencies among engineers. As soon as one engineer needs input from six other engineers to complete a task, productivity takes a massive hit. Here's a thought experiment you can run to help find your own 10x improvement: Imagine your workload scales 10x overnight, and you absolutely must meet this increase without hiring more engineers or working additional hours. How do you do it? An out-of-the-box thought exercise like this can help you radically improve your approach.
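
The excerpt doesn't show the author's actual framework, but a minimal Python sketch of the underlying idea, with all names hypothetical, is to push the shared plumbing (the driver loop, normalization, and so on) into a base class so each new connector declares only what differs:

    from abc import ABC, abstractmethod
    from typing import Iterator

    class BaseConnector(ABC):
        # The shared plumbing lives here exactly once.

        def sync(self) -> list[dict]:
            # Common driver loop reused by every connector.
            rows: list[dict] = []
            for page in self.fetch_pages():
                rows.extend(self.normalize(record) for record in page)
            return rows

        def normalize(self, record: dict) -> dict:
            # Common normalization step; subclasses rarely override it.
            return {key.lower(): value for key, value in record.items()}

        @abstractmethod
        def fetch_pages(self) -> Iterator[list[dict]]:
            # The only part each new connector must implement.
            ...

    class InMemoryConnector(BaseConnector):
        # Toy "source"; a real subclass would page through an external API.
        def fetch_pages(self) -> Iterator[list[dict]]:
            yield [{"ID": 1, "Name": "a"}, {"ID": 2, "Name": "b"}]

    print(InMemoryConnector().sync())

Each additional source then costs only a small subclass, which is how this style of framework can collapse thousands of lines per connector into roughly a hundred.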


Your project is unique, so why make it replicable?

While replicability isn’t as important as delivery in a modern environment, where software is often unique to the organisation, it is important to be able to prove effectiveness. At Catapult, we use an upskilling system that we call the Lighthouse Model, whereby we identify a team from the ground up that can act as a model for the rest of the business and focus first on developing them as a group. By demonstrating the effectiveness of agile as a foundation on which to build software, a Lighthouse team creates a fertile environment, which removes blocks and gathers data to help develop buy-in across the board. All this works. In 2018, the Standish Group established that agile projects are twice as likely to succeed as waterfall projects. In the same study the company notes that 28 per cent of waterfall projects fail, while only 11 per cent of agile projects meet the same fate. In this context, the metrics of success went beyond whether the project was on time and on budget and considered its outcomes and impact. They looked beyond delivery against the plan to include the value delivered and customer satisfaction. In essence, they looked for the real meaning of success.


A New Definition of Reliability

The first thing you might assume is that reliability is synonymous with availability. After all, if a service is up 99% of the time, that means a user can rely on it 99% of the time, right? Obviously, this isn’t the whole story, but it’s worth exploring why. For starters, these simple system health metrics aren’t really so “simple.” Starting with just the Four Golden Signals, you’ll end up with the latency, resource saturation, error rate, and uptime of all your different services. For a complex product, this adds up to a whole lot of numbers. How do you combine and weigh all these metrics? Which are the important ones to watch and prioritize? Judging things like errors and availability can be difficult too. Gray failure, when a service isn’t working completely but hasn’t totally failed either, can be hard to capture with quantitative metrics. How do you decide when a service is “available enough”? What about a situation where your service performs exactly as intended, but doesn’t align with your customers’ expectations? How do you capture these in your picture of system health? Clearly, there needs to be another layer to this definition of reliability!


Architecture Pitfalls: Don’t use your ORM entities for everything — embrace the SQL!

I suspect one of the greatest lies ever told in web application development is that if you use an ORM you can avoid writing and understanding SQL; “it’s just an implementation detail”. That might be true at first, but once you go beyond the basics it falls away quickly. ... It’s much better to let the database do this kind of filtering. After all, it’s what all of the clever folk who work on databases spend a lot of time and effort optimising. For most ORMs you have the option of writing analogues to SQL, which can get you quite a long way. For example, JPA has JPQL and Hibernate has HQL. These let you build abstracted queries that should work on all databases your ORM supports. The implication is that your team needs to embrace SQL and understand how to use it, rather than avoiding it by using application code instead. To dispel a common source of anxiety on this: you don’t need to be a SQL guru to get started and become familiar with what you will need for the vast majority of your implementation requirements. There are also excellent resources and books available; I will link some below.
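
The excerpt's examples are Java's JPQL and HQL; as a rough Python analogue of the same principle, here is a sketch with SQLAlchemy and an in-memory SQLite database (the User model is invented), contrasting filtering in application code with pushing the filter into the query so the database does the work:

    from sqlalchemy import Boolean, Column, Integer, String, create_engine, select
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String)
        active = Column(Boolean)

    engine = create_engine("sqlite://")  # throwaway in-memory database
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all([User(name="ada", active=True), User(name="bob", active=False)])
        session.commit()

        # Anti-pattern: pull every row, then filter in application code.
        slow = [u for u in session.scalars(select(User)) if u.active]

        # Better: express the filter in the ORM's query language so it
        # compiles to SQL ("... WHERE users.active") and runs in the database.
        fast = session.scalars(select(User).where(User.active)).all()

        print([u.name for u in slow], [u.name for u in fast])

With two rows the difference is invisible, but with millions of rows the first version drags the whole table across the wire while the second lets the database's query planner do what it is built for.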


How To Build A Network Of Security Champions In Your Organization

A security champions programme (SCP) enlists employees from different disciplines across a company (HR, marketing, finance, etc.) for focused cybersecurity training and guidance. These security champions then become the contact point and voice for cybersecurity within their various departments or offices, alongside their main role. They help to advise on, embed and reinforce good security practices with their colleagues. This makes security advice more relatable and accessible, avoiding the “us versus them” attitude that can sometimes exist between employees and traditional enterprise security teams. It’s easier for a colleague to explain a security risk or issue to a co-worker than it is for a security pro whom the co-worker has never met. The security champion’s role is a little like that of a department’s fire marshal. In the same way that the marshal doesn’t need to be a specialist in firefighting, the security champion doesn’t need to be an IT or infosec pro; they just need to know how their colleagues work, what the security risks are within their department or team, and the common-sense steps to take to mitigate those risks.


Companies warned to step up cyber security to become ‘insurable’

Carolina Klint, risk management leader for continental Europe at insurance broker Marsh and one of the contributors to the report, said that insurance companies were now coming out and saying that “cyber risk is systemic and uninsurable”. That means, in future, companies may not be able to find cover for risks such as ransomware, malware or hacking attacks. “It’s up to the insurance industry and to the capital markets whether or not they find the risk palatable,” she said in an interview with Computer Weekly, “but that is the direction it is moving in.” In recent days, cyber attacks have disrupted the international delivery services of the Royal Mail and infected IT systems at the Guardian newspaper with ransomware. The Global Risks Report rates cyber warfare and economic conflict as more serious threats to stability than the risks of military confrontation. “There is a real risk that cyber attacks may be targeted at critical infrastructure, health care and public institutions,” said Klint. “And that would have dramatic ramifications in terms of stability.”


6 Roles That Can Easily Transition to a Cybersecurity Team

Software engineers possess various technical skills, including coding and software development. They also understand the complexities involved in developing a secure application. This makes them well-suited for different types of cybersecurity tasks. ... They should also be familiar with various cyber threats, such as malware and phishing. Additionally, since software development is constantly evolving, software engineers should be prepared to keep up with the latest trends to remain competitive. ... Network architects possess a strong knowledge of networking technologies and are proficient in setting up secure networks. While not all security roles require a deep technical understanding, network architects are well-suited to design secure networks and implement protection measures. They can also review existing systems for vulnerabilities and recommend solutions to mitigate risks. ... They should also be familiar with emerging technologies and techniques related to cybersecurity, such as artificial intelligence (AI) and machine learning (ML). Another important skill for network architects is identifying and differentiating between legitimate and malicious traffic signals.


Getting started with data science and machine learning: what architects need to know

In almost every scientific field, the role of the data scientist is actually played by a physicist, chemist, psychologist, mathematician (for numerical experiments), or some other domain expert. They have a deep understanding of their field and pick up the necessary techniques to analyze their data. They have a set of questions they want to ask and know how to interpret the results of their models and experiments. With the increasing popularity of industrial data science and the rise of dedicated data science educational programs, a typical data scientist's education now lacks this domain-specific training. ... There are two opposing approaches. One is to know which tool to use, pick up a pre-implemented version online, and apply it to a problem. This is a very reasonable approach for most practical problems. The other is to deeply understand how and why something works. This approach takes much more time but offers the advantage of being able to modify or extend the tool to make it more powerful.


ZeroOps Helps Developers Manage Operational Complexity

The first thing to take into account when implementing ZeroOps for your business: You must consider everything that isn’t directly driving value. Who should be doing those tasks? You want your core staff to be focused on the business, so it’s worth considering a managed service provider as a partner. This can help provide your team with the skills and support they need, while allowing them to focus on their core competencies. The right tools can help your team be more productive than you ever imagined, without hiring new full-time employees. ... More agile, with less pressure and responsibility to handle “the little things” that we know aren’t so little. Imagine how your team members could shine when supported by experts to assist them so they can focus on providing value. Imagine being able to deliver projects much more quickly so delivery expectations actually aligned with what was realistic. ... Managed services can help make your team more productive and capitalize on their talent. When you struggle with a problem, it’s likely that your managed service provider has already solved it for others so you don’t have to reinvent the wheel.


Dark Web Monitoring For Law Firms: Is It Worthwhile?

One real value of a dark web scan is awareness. You should be able to obtain an initial dark web scan free of charge – without paying an ongoing monthly monitoring fee, which we certainly don’t recommend. The initial report will help identify whether you have law firm employees who tend to reuse the same password across multiple sites. It may even identify sites you were not aware of so that you can immediately change the password. Use the dark web scan to educate employees at your next cybersecurity awareness training session. If you’re not teaching your employees about cybersecurity at least annually, you are missing a very significant part of cyber resilience! A human element is involved in data breaches 82% of the time. Take control of your data and don’t hand it over to a monitoring service. You should be using a password manager and a unique password for each website or application you use. Put a freeze on your credit file at the three major credit bureaus. Freezing your credit file is free.
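
One concrete, free check in this spirit, separate from any monitoring vendor, is the Pwned Passwords k-anonymity API. A minimal Python sketch follows; only the first five characters of the password's SHA-1 hash ever leave your machine:

    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        # Return how often a password appears in known breach corpora.
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        # k-anonymity: the service only ever sees the 5-character prefix.
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as response:
            body = response.read().decode()
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    print(pwned_count("correct horse battery staple"))

A non-zero count doesn't mean your account was breached, only that the password itself is circulating in breach corpora, which is reason enough to stop using it.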



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - January 13, 2023

Poor cloud architecture and operations are killing cloud ROI

If the cloud never had the potential to return value to the business, nobody would use it. And indeed, there are businesses that are very successful with cloud, even reshaping the business around the use of cloud computing. These companies are leveraging cloud as a true force multiplier to build innovative solutions, as well as to provide agility and scalability. However, many cannot find business value with cloud computing. Most disturbing, they are not finding value while spending about the same amount of money as those who are. We must therefore conclude that bad decisions are being made. Cloud computing technology has been relevant for about 15 years. We understand that it’s what you do and your company culture that make you truly successful with cloud computing, not what you spend. So why are we still seeing winners and losers? ... First, bad architectures need to be fixed before they can operate properly. You can have a disciplined and highly automated operations team and technology stack, but if the solution is poorly designed, the result is going to be less than stellar, no matter what.


Innovation: Your solution for weathering uncertainty

Innovation has always been essential to long-term value creation and resilience because it creates countercyclical and noncyclical revenue streams. Paradoxically, making big innovation bets may now be safer than investing in incremental changes. Our long-standing research shows that innovation success rests on the mastery of eight essential practices. Five of these practices are particularly important today: resetting the aspiration based on the viability of current businesses, choosing the right portfolio of initiatives, discovering ways to differentiate value propositions and move into adjacencies, evolving business models, and extending efforts to include external partners. ... In times of disruption or deep uncertainty, companies have to carefully balance short-term innovations aimed at cost reductions and potential breakthrough bets. As customers’ demands change, overindexing on small product tweaks (that address needs which may be temporary) is unlikely to boost long-term performance. However, “renovations” to designs and processes can produce savings that help fund longer-term investments in innovations that may create routes to profitable growth.


The Truth About Cybersecurity Challenges Facing the Healthcare Industry

In general, healthcare IT has accrued technical debt for more than 25 years. Everywhere you look, whether it’s at the doctor’s office, hospital, or an urgent care facility, you see disparate and often dated IT systems. It’s not as rare as you’d think to see Windows XP–based computers at the check-in desk and throughout the facility. Many of the most common pieces of equipment and attached computer systems run outdated operating systems and unpatched, archaic software, and have little security on them. I promise you it’s not for lack of trying by the IT and cyber-security team. So much outdated software exists largely because the vendors that support these systems focus on the healthcare aspect rather than upkeep and security. In other instances, some devices were never intended to be connected to a network, rendering them vulnerable to remote attacks because they aren’t configured to be protected from network-based attackers. Finally, there is certainly some “if it ain’t broke, don’t fix it” mentality. Walking around, you’ll find computer systems under people’s desks that have served a single purpose for a very long time.


Time to Look at the Role of the CISO Differently

It is time to stop searching for non-existent profiles, expecting the CISO to be credible one day in front of the board, the next in front of hackers, the third in front of developers, and so on across the depth and breadth of the enterprise and its supply chain. Those profiles don’t exist anymore, given the transversal complexity cyber security has developed over the past two decades. The role of the CISO has to be one of a leader, structuring, organising, delegating and orchestrating work across their team, across the firm, and across the multiple third parties involved in delivering or supporting the business. In essence, knowing what to do is reasonably well established, and cyber security good practice at large still protects from most threats and still ensures a degree of compliance with most regulations. But by focusing excessively on purely technical approaches to cyber security challenges, large organizations have failed to protect themselves effectively and efficiently, in spite of massive investments in that space over the last two decades.


MACH as an Enterprise Architecture strategy

MACH is an acronym for Microservices, API-first, Cloud-native, and Headless. It’s a modern approach to building and deploying software applications that can help organizations be more agile, scalable, and flexible. In a MACH architecture, software applications are built as a collection of independent, self-contained microservices that communicate with each other through APIs (Application Programming Interfaces). The front-end and back-end components are separated, and the entire solution is designed to be deployed in the cloud. ... There are several benefits to using a MACH architecture for building and deploying software applications:

Agile development: MACH architectures allow different parts of an application to be developed and deployed independently, which can make it easier to make changes and updates without disrupting the entire system. This can help organizations be more agile and responsive to changing business needs.

Scalability: MACH architectures are designed to be deployed in a cloud computing environment, which can provide the scalability and flexibility needed to support rapid growth or spikes in demand.
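
As a small illustration of the API-first and headless pieces, here is a hedged Python sketch of one self-contained microservice (using FastAPI; the catalogue service and its fields are invented) that exposes plain data over an API and leaves rendering to whatever front end consumes it:

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="catalog-service")  # one self-contained microservice

    class Product(BaseModel):
        id: int
        name: str
        price: float

    # Toy datastore; a real service would own its database (MACH: each
    # microservice is independent and cloud-deployable).
    PRODUCTS = {1: Product(id=1, name="widget", price=9.99)}

    @app.get("/products/{product_id}", response_model=Product)
    def get_product(product_id: int):
        # Headless: return plain data; any web, mobile, or kiosk front end
        # renders it however it likes.
        if product_id not in PRODUCTS:
            raise HTTPException(status_code=404, detail="product not found")
        return PRODUCTS[product_id]

    # Run with, e.g.: uvicorn catalog:app  (if this file is saved as catalog.py)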


Maximizing data value while keeping it secure

Many organizations stumble and fail because they lack complete visibility into all data assets in clouds and beyond. To take visibility to a higher level, it’s vital to have a catalog of all managed and shadow assets, along with their owners, locations, security and governance measures enabled for the data. Without a central repository and a single view, there’s no way to know what data exists, how it’s stored, where it’s used and how it’s shared. Essentially, an organization winds up flying blind. Yet the advantages of robust discovery and visibility don’t stop there. With this information it’s possible to adapt and expand security profiles as needs and conditions change. ... Sharing data in the cloud involves complexity and risk. That’s a given. To maximize the opportunity—including harnessing the full functionality of cloud-native tools—an organization must know who is accessing data and how they are using it. Therefore, a robust identity management framework is crucial. Administrators and others must be able to analyze roles and permission settings in data assets that reside in clouds and across multi-cloud frameworks. 


Top automation pitfalls and how to avoid them

Automating a bad process can make things worse, as it can magnify or exacerbate underlying issues, especially if humans are taken out of the loop. In some cases, a process is automated because the technology is there, even if automation isn’t required. For example, if a process occurs very rarely, or there’s a great deal of variation in the process, then the cost of setting up the automation, teaching it to handle every use case, and training employees how to use it may be more expensive and time-consuming than the old manual approach. And putting the decision entirely into the hands of data scientists, who may be far removed from the actual work, or of end users who might not know how automation works, can easily send a company down a dead end, says James Matcher, intelligent automation leader at Ernst & Young. That recently happened at a company he worked with, a retail store chain with locations around the US: the retailer approached people on the front lines, employees and managers working on the shop floors, for suggestions about manual processes that should be automated.


What’s the role of the CTO in digital transformation?

A CTO needs to take on the role of the ‘bridge builder’ between the strictly technical components of a transformation strategy and how they can apply to people and process in the specific context of an organisation. Digital transformation is a team activity: each role needs to bring its full insights and experience to the process for the CTO to manage. The CTO has specific technological insight and therefore needs to be directly involved in helping the entire organisation identify where technical systems are simply obsolete and not fit for purpose. So, as well as being a bridge builder, the CTO naturally leads the charge when dealing with a technology-led approach. They must be able to explain where the value is in the application of technological change in context; too often we see visions that are de-contextualised from the reality on the ground. This kind of technological planning does not allow for realistic strategic planning. With a vision of the ambitious but feasible in sight, it is then the whole leadership team’s task to decide what course they are going to map out and to work together on the digital transformation journey.


How Organizations Should Respond to the CircleCI Security Incident

CircleCI has taken proactive steps to mitigate risk for its customers, but simply revoking secrets from the platform is not enough, according to Jaime Blasco, co-founder and CTO of cybersecurity company Nudge Security. “It’s still important to assume that every connected application and secret has been compromised. Organizations should verify the steps that these vendors have taken and also take steps to rotate secrets within any other connected application,” he explains. Customers can leverage commercially available or open-source tools, aside from the one offered by CircleCI, to discover their secrets. “One option is to use Trufflehog, an open-source tool that scans for secrets across multiple platforms, including CircleCI, Github, Gitlab, and AWS S3,” says Blasco. CircleCI is assuming responsibility and taking steps to protect its customers, Assaf Morag, lead data analyst at cloud native security company Aqua Security, notes. But it is important for customers to respond proactively to the security incident as well.
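
Secret scanners such as Trufflehog typically combine pattern matching with entropy checks. As a heavily simplified, stand-alone illustration of the entropy idea (not Trufflehog's actual algorithm), a Python sketch like this flags long, high-entropy tokens in a file:

    import math
    import re
    import sys
    from collections import Counter

    def shannon_entropy(token: str) -> float:
        # Bits per character; random base64-like strings score near 6.
        counts = Counter(token)
        total = len(token)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def candidate_secrets(text: str, min_len: int = 20, threshold: float = 4.0):
        # Long tokens with near-random character distributions are suspicious.
        pattern = r"[A-Za-z0-9+/=_\-]{%d,}" % min_len
        for token in re.findall(pattern, text):
            if shannon_entropy(token) >= threshold:
                yield token

    if __name__ == "__main__":
        with open(sys.argv[1]) as handle:
            for hit in candidate_secrets(handle.read()):
                print("possible secret:", hit)

A naive check like this produces many false positives (hashes, compressed blobs), which is why production scanners layer verified, service-specific detectors on top.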


Artificial intelligence in strategy

Every business probably has some opportunity to use AI more than it does today. The first thing to look at is the availability of data. Do you have performance data that can be organized in a systematic way? Companies that have deep data on their portfolios down to business line, SKU, inventory, and raw ingredients have the biggest opportunities to use machines to gain granular insights that humans could not. Companies whose strategies rely on a few big decisions with limited data would get less from AI. Likewise, those facing a lot of volatility and vulnerability to external events would benefit less than companies with controlled and systematic portfolios, although they could deploy AI to better predict those external events and identify what they can and cannot control. Third, the velocity of decisions matters. Most companies develop strategies every three to five years, which then become annual budgets. If you think about strategy in that way, the role of AI is relatively limited other than potentially accelerating analyses that are inputs into the strategy. 



Quote for the day:

"Effective questioning brings insight, which fuels curiosity, which cultivates wisdom." -- Chip Bell

Daily Tech Digest - January 12, 2023

Agritech forces gain ground across Africa

One of the crucial issues that agriculture in Africa is currently solving, according to Gaddas, is a lack of water. He says that in Senegal, Tunisia and many other countries, companies are working hard on intelligent irrigation, and on how to optimize water resources that are becoming increasingly scarce, especially in the context of climate change and unpredictable rainfall. “Managing water is becoming crucial,” he says. “We’ve met start-ups that use drones, which, through their precision devices, help to collect data that can be used by farmers, such as the levels of nitrogen from the fields, precise mapping of areas with fertiliser deficits, and others that solve plant disease problems by making diagnoses. There are also ERP systems for farm management and to know what is happening in real time—the management of inputs, fertilizers and more.” He also appreciated the digital aquaculture companies that allow for very rational management of aquaculture farms, while praising the impressive diversity of solutions. “The diversity of problems that farmers face in Africa is very wide but creativity is not the weak point of Africans,” he says.


Ushering in an era of pervasive intelligence, powered by 6G

The impact that this new era will have cannot be overstated. It will power economies, drive sector convergence, enable the distributed infrastructure behind Web 3.0 and scale and interconnect metaverses. Put simply, it will transform all aspects of life. But getting there isn’t straightforward, and we need to act now to lay the foundations necessary to harness its power. This on its own is no straightforward undertaking, as evidenced by the issues with the 5G+ rollout and adoption. The right infrastructure and business models were not in place, which led to delays and innovative potential left on the table. Let’s learn from past mistakes, course correct and ensure we’re ready for the future of pervasive intelligence. ... Transformation into the pervasive intelligence era will first require the establishment of a high-performance, integrated ecosystem made up of a range of partners from different industries and sectors. This is critical, as pervasive intelligence will only be reached in an environment where data and information can move freely and securely. This, however, cannot happen if companies operate in silos or in isolation.


DeFi Labs Revolutionises Decentralized Finance by Leveraging AI

According to the co-founder of DeFiLabs, “With our AI-powered yield farm, we’re introducing a new level of innovation to the DeFi space. We’re making it possible for users of all levels to earn high returns on their investments, while also minimizing risk. Our goal is to provide our users with the best investment opportunities available in the DeFi space, and our AI-powered yield farm is just the beginning. We’re excited to see how our users will benefit from this new offering.” This launch is also a significant step for the Binance Smart Chain ecosystem, as it showcases the capabilities and the potential for growth of Binance Smart Chain. This yield farm will encourage the usage of the Binance Smart Chain and drive the adoption of DeFi on this network. The yield farm is live and fully operational, and users can start staking their Binance Coin (BNB) or other supported tokens to earn high returns on their investments. The DeFiLabs team is constantly working to add new features, tokens, and investment options to the yield farm, making it even more valuable for users.


The importance of collaboration in maximising cybersecurity

The CISO has a vital role within companies, and one which is currently evolving. Beyond technical knowledge, one of the most important aspects of the CISO’s role in an enterprise is collaboration. Information security and data protection controls permeate all levels and departments of a company, not just tech. As such, it is important to relay technical information succinctly to all relevant directors and parties, ensuring all teams are adequately equipped to manage cyber risks. There is a wide range of cybersecurity services that can be adopted, including perimeter and cloud security, device security, network security, threat hunting, DevSecOps, and web and mobile application security. To make them all function and operate as tightly as possible, you must work with a team of experts to ensure that your company is at the forefront of new advances in cybersecurity. The removal of silos is therefore integral to ensuring companies are prepared and equipped to defend themselves against cyber-attacks.


IT supply issues have organizations shifting from just-in-time to just-in-case buying

One thing more enterprises should be looking for is greater visibility from their suppliers. "A lot of people are realizing that we're living in a more transparent world now," said Genpact's Waite. And integration between companies has increased, with some providers offering more information to their customers. ... With this approach, vendors are selected not just based on technical fit, form, and function, but also based on where in the world they source their materials, or how big a company they are. Supply chain visibility is particularly important for manufacturers. They need to know if the supplies they need are on track, or if alternate sources have to be found in order to avoid production delays. "Our supply chain is built entirely on transparency," says Carl Nothnagel, COO at specialty hardware manufacturer MBX Systems. "With every supplier, we push for that information. Sometimes we don’t get it, and we’re left with projecting, or guessing as best as we can. We have some manufacturers that are very transparent and we can see where it's going to hit every day, and some are a bit of a black hole."


6 Data Governance Principles Corporate Leaders Should Apply in 2023

The success of your data governance plan depends on what your employees do with the data they handle. Therefore, once you’ve created a data governance plan, you should share it with your employees. Successful data governance requires a holistic, organization-wide approach that demands transparency across your organization. You can further demonstrate internal transparency by documenting all data governance decisions and actions. This documentation can help you learn from past mistakes and protect your corporation if you experience a data breach, lawsuit, investigation, or other regulatory action. ... Responsibility and accountability are integral parts of any corporation’s data governance processes. Traditionally, your information technology (IT) department would be responsible for managing your corporation’s data. But now that most—if not all—of your employees deal with data on a daily basis, employees throughout your organization must see themselves as the stewards of your data. So, who is responsible for what data? That is something you will need to decide. 


Study shows attackers can use ChatGPT to significantly enhance phishing and BEC scams

The longer and more complex a phishing message is, the more likely it is that attackers will make grammatical errors or include odd phrasing that careful readers will pick up on and become suspicious of. With messages generated by ChatGPT, this line of defense, which relies on user observation, is easily defeated, at least as far as the correctness of the text is concerned. Detecting that a message was written by an AI model is not impossible, and researchers are already working on such tools. While these might work with current models and be useful in some scenarios, such as schools detecting AI-generated essays submitted by students, it's hard to see how they can be applied to email filtering, because people are already using such models to write legitimate business emails and simplify their work. "The problem is that people will probably use these large language models to write benign content as well," WithSecure Intelligence Researcher Andy Patel tells CSO. ... Attackers can take it much further than writing simple phishing lures. They can generate entire email chains between different people to add credibility to their scam.


Insights on Nordic artificial intelligence strategies

The Nordics are generally early adopters of technology – and AI is no exception. More than 25% of Nordic companies are already investing at least 20% of their research and development budget in AI projects. Moreover, the Nordic countries are planning to get ahead – or at least keep up with other industrial nations. Each of the four countries has at least one top-ranking AI-related educational institution – and private investment in AI has more than doubled in the region since 2021. ... Finnish AI research runs primarily along three different dimensions. The first is to optimise the performance of AI algorithms to head off the problem of computational requirements getting too far ahead of what hardware can deliver. As a small country, Finland is particularly sensitive to the increasing costs of computational power – even though it houses what is currently Europe’s most powerful supercomputer, LUMI. The second dimension is trustworthy AI. Ethics and values are important to Finland, as they are in all the other Nordic countries. Research in trustworthy AI aims to overcome the complex ethical challenges inherent to AI.


Structured Data Management for Discovery and Insight

Polanco says the chief data officer, chief compliance officer, and CISO should collaborate on finding an effective structured data management practice that provides a well-governed, fully compliant data architecture connecting data sources for data consumers. “Data must be findable, accessible, interoperable, and re-usable for [data] consumers, while also ensuring compliance with data quality standards and data security and privacy measures,” he adds. Anyone in a managerial position who encounters data will likely have considered best practices for data management already. “While those managers may be responsible for implementing data management resources for their respective teams, the initial solution can come from technology companies that weld together the manual knowledge of what the data needs to look like and the efficiency of a more automated sorting process,” Polanco says. Macosky adds that while the chief data officer position is fairly new across industries, he expects the role to become more vital as organizations prioritize and value data management.


How to Measure the Energy Consumption of Bugs

It is very important to always keep the underlying architecture, and the communication with all connected services, in mind. At first sight, it may often seem that a bug does not affect energy consumption. This impression can quickly change when the broader context of the feature where it occurs is taken into account. A QA engineer needs to understand communication between the services: how it is implemented (in collaboration with the developers), when it takes place, where it is initiated, and where the services and features run. In practice this means that QA engineers who want to measure the energetic impact of their product in more detail must understand not only the customers’ perspective (as usual) but also many implementation details from different perspectives. Where do particular services run? On which infrastructure? Which libraries are used? How can the implementation of the product be modified in order to measure energy consumption? Improving energy consumption is not something that can be activated by just pushing a button.
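
On Linux with an Intel CPU, one practical starting point for such measurements is the RAPL energy counter exposed under /sys/class/powercap (the exact path varies by machine, reading it may require elevated privileges, and it measures the whole CPU package, so keep the machine otherwise idle and average several runs). A rough Python sketch comparing the energy cost of two implementations of the same behaviour:

    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # CPU package counter

    def read_energy_uj() -> int:
        with open(RAPL) as counter:
            return int(counter.read())

    def measure_joules(workload) -> float:
        # Approximate energy consumed while the workload runs. The counter
        # is cumulative (and wraps eventually); production tools handle that.
        before = read_energy_uj()
        workload()
        after = read_energy_uj()
        return (after - before) / 1_000_000  # microjoules -> joules

    def buggy_busy_wait():
        # A classic energy bug: polling instead of sleeping.
        deadline = time.time() + 1.0
        while time.time() < deadline:
            pass

    def fixed_sleep():
        time.sleep(1.0)

    print("busy wait:", measure_joules(buggy_busy_wait), "J")
    print("sleep:    ", measure_joules(fixed_sleep), "J")

Both variants take one second of wall-clock time, which is exactly the point: a latency-oriented test would never flag the busy-wait version, while an energy measurement makes the bug visible.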



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller