Daily Tech Digest - July 31, 2021

5 Cybersecurity Tactics To Protect The Cloud

Best practices to protect companies’ operations in the cloud are guided by three fundamental questions. First, who is managing the cloud? Many companies are moving towards a Managed Service Provider (MSP) model; when that model includes the monitoring and management of security devices and systems, the provider is known as a Managed Security Service Provider (MSSP). At a basic level, the security services offered include managed firewall, intrusion detection, virtual private network, vulnerability scanning and anti-malware services, among others. Second, what is the responsibility shift in this model? There is always a shared responsibility between companies and their cloud infrastructure providers for managing the cloud, and this applies to private, public, and hybrid cloud models. Typically, cloud providers are responsible for the infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) layers while companies take charge of the application layer. Companies are ultimately responsible for deciding the user management concept for business applications, such as identity governance for human resources and finance applications.


Yugabyte CTO outlines a PostgreSQL path to distributed cloud

30 years ago, open source [databases were not] the norm. If you told people, “Hey, here’s an open source database,” they’re going to say, “Okay? What does that mean? What is it? What does it really mean? And why should I be excited?” And so on. I remember because at Facebook I was a part of the team that built an open source database called Cassandra, and we had no idea what would happen. We thought, “Okay, here’s this thing that we’re putting out in the open source; let’s see what happens.” And this was in 2007. Back in that day, it was important to use a restrictive license — like GPL — to encourage people to contribute and not just take stuff from the open source and never give back. So that’s the reason why a lot of projects ended up with GPL-like licenses. Now, MySQL did a really good job of adapting to the workloads that came with the web back then. They were tier-two workloads initially — not super critical — but over time they became very critical, and the MySQL community aligned really well, and that gave them their speed. But over time, as you know, open source has become a staple, and most infrastructure pieces are starting to become open source.


Introducing MVISION Cloud Firewall – Delivering Protection Across All Ports and Protocols

McAfee MVISION Cloud Firewall is a Firewall-as-a-Service solution that enforces centralized security policies to protect a distributed workforce across all locations, ports and protocols. MVISION Cloud Firewall allows organizations to extend comprehensive firewall capabilities to remote sites and remote workers through a cloud-delivered service model, securing data and users across headquarters, branch offices, home networks and mobile networks, with real-time visibility and control over all network traffic. Its core value proposition is a next-generation intrusion detection and prevention system that uses advanced detection and emulation techniques to defend against stealthy threats and malware attacks with industry-leading efficacy. A next-generation firewall application control system enables organizations to make informed decisions about allowing or blocking applications by correlating threat activities with application awareness, including Layer 7 visibility into more than 2,000 applications and protocols.


How Data Governance Improves Customer Experience

Customer journey orchestration allows an organization to meaningfully modify and personalize a customer’s experience in real-time by pulling in data from many sources to make intelligent decisions about what options and offers to provide. While this sounds like a best-case scenario for customers and company alike, it requires data sources to be unified and integrated across channels and environments. This is where good data governance comes into play. Even though many automation tasks may fall in a specific department like marketing or customer service, the data needed to personalize and optimize any of those experiences is often coming from platforms and teams that span the entire organization. Good data governance helps to unify all of these sources, processes and systems and ensures customers receive accurate and impactful personalization within a wide range of experiences. As you can see, data governance can have a major influence over how the customer experience is delivered, measured and enhanced. It can help teams work better together and help customers get more personalized service.


The Life Cycle of a Breached Database

Our continued reliance on passwords for authentication has contributed to one toxic data spill or hack after another. One might even say passwords are the fossil fuels powering most IT modernization: They’re ubiquitous because they are cheap and easy to use, but that means they also come with significant trade-offs — such as polluting the Internet with weaponized data when they’re leaked or stolen en masse. When a website’s user database gets compromised, that information invariably turns up on hacker forums. There, denizens with computer rigs that are built primarily for mining virtual currencies can set to work using those systems to crack passwords. How successful this password cracking is depends a great deal on the length of one’s password and the type of password hashing algorithm the victim website uses to obfuscate user passwords. But a decent crypto-mining rig can quickly crack a majority of password hashes generated with MD5 (one of the weaker and more commonly-used password hashing algorithms).
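A minimal Python sketch (an illustration, not from the article) makes the economics concrete: MD5 is a fast, general-purpose hash, while a deliberately slow, salted construction such as PBKDF2 forces an attacker to pay a heavy cost per guess. The iteration count below is an assumption chosen for illustration.

    import hashlib
    import os

    password = b"correct horse battery staple"

    # Fast, unsalted MD5 -- what a weak site might store. A mining-class
    # GPU rig can test billions of these guesses per second.
    weak_hash = hashlib.md5(password).hexdigest()

    # Salted PBKDF2 with a high iteration count -- every guess now costs
    # hundreds of thousands of hash operations instead of one.
    salt = os.urandom(16)
    strong_hash = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

    print("MD5:   ", weak_hash)
    print("PBKDF2:", strong_hash.hex())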


Zero Trust Adoption Report: How does your organization compare?

From the wide adoption of cloud-based services to the proliferation of mobile devices, and from the emergence of advanced new cyberthreats to the recent sudden shift to remote work, the last decade has been full of disruptions that have required organizations to adapt and accelerate their security transformation. And as we look forward to the next major disruption—the move to hybrid work—one thing is clear: the pace of change isn’t slowing down. In the face of this rapid change, Zero Trust has risen as a guiding cybersecurity strategy for organizations around the globe. A Zero Trust security model assumes breach and explicitly verifies the security status of identity, endpoint, network, and other resources based on all available signals and data. It relies on contextual real-time policy enforcement to achieve least privileged access and minimize risks. Automation and machine learning are used to enable rapid detection, prevention, and remediation of attacks using behavior analytics and large datasets.
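A toy sketch of that verification loop (hypothetical signal names and thresholds; real Zero Trust engines weigh far richer telemetry):

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        # Hypothetical signals; real policy engines combine many more.
        identity_verified: bool    # e.g., MFA completed for this session
        device_compliant: bool     # endpoint meets health policy
        network_risk: float        # 0.0 (trusted) .. 1.0 (hostile)
        resource_sensitivity: str  # "low" or "high"

    def decide(req: AccessRequest) -> str:
        """Assume breach: re-verify every signal on every request."""
        if not (req.identity_verified and req.device_compliant):
            return "deny"
        if req.resource_sensitivity == "high" and req.network_risk > 0.3:
            return "step-up-auth"  # demand fresh verification first
        return "allow-least-privilege"

    print(decide(AccessRequest(True, True, 0.5, "high")))  # step-up-auth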


Container Technology Complexity Drives Kubernetes as a Service

The reason why managed Kubernetes is now gaining traction is obvious, according to Brian Gracely, senior director of product strategy for Red Hat OpenShift. He pointed out that containers and Kubernetes are relatively new technologies, and that managed Kubernetes services are even newer. This means that until recently, companies that wanted or needed to deploy containers and use Kubernetes had no choice but to invest their own resources in developing in-house expertise. "Any time we go through these new technologies, it's early adopters that live through the shortfalls of it, or the lack of features or complexity of it, because they have an immediate problem they're trying to solve," he said. Like Galabov, Gracely thinks that part of the move towards Kubernetes as a Service is motivated by the fact that many enterprises are already leveraging managed services elsewhere in their infrastructure, so doing the same with their container deployments only makes sense. "If my compute is managed, my network is managed and my storage is managed, and we're going to use Kubernetes, my natural inclination is to say, 'Is there a managed version of Kubernetes?' as opposed to saying, 'I'll just run software on top of the cloud,'" he said. "That's sort of a normal trend."


NIST calls for help in developing framework for managing risks of AI

"While it may be impossible to eliminate the risks inherent in AI, we are developing this guidance framework through a consensus-driven, collaborative process that we hope will encourage its wide adoption, thereby minimizing these risks," Tabassi said. NIST noted that the development and use of new AI-based technologies, products and services bring "technical and societal challenges and risks." "NIST is soliciting input to understand how organizations and individuals involved with developing and using AI systems might be able to address the full scope of AI risk and how a framework for managing these risks might be constructed," NIST said in a statement. NIST is specifically looking for information about the greatest challenges developers face in improving the management of AI-related risks. NIST is also interested in understanding how organizations currently define and manage characteristics of AI trustworthiness. The organization is similarly looking for input about the extent to which AI risks are incorporated into organizations' overarching risk management, particularly around cybersecurity, privacy and safety.


Studies show cybersecurity skills gap is widening as the cost of breaches rises

The worsening skills shortage comes as companies are adopting breach-prone remote work arrangements in light of the pandemic. In its report today, IBM found that the shift to remote work led to more expensive data breaches, with breaches costing over $1 million more on average when remote work was indicated as a factor in the event. By industry, data breaches in health care were most expensive at $9.23 million, followed by the financial sector ($5.72 million) and pharmaceuticals ($5.04 million). While lower in overall costs, retail, media, hospitality, and the public sector experienced a large increase in costs versus the prior year. “Compromised user credentials were most common root cause of data breaches,” IBM reported. “At the same time, customer personal data like names, emails, and passwords was the most common type of information leaked — a dangerous combination that could provide attackers with leverage for future breaches.” IBM says that it found that “modern” security approaches reduced expenses, with AI, security analytics, and encryption being the top three mitigating factors.


Exploring BERT Language Framework for NLP Tasks

BERT, or Bidirectional Encoder Representations from Transformers, is an open-source machine learning framework used to pre-train baseline NLP models on unlabelled data, streamlining downstream NLP tasks. BERT stands out among neural network-based NLP models because it uses both left and right context to build relations between words. It is based on the Transformer, a path-breaking model introduced in 2017 that learns which words in a sentence matter most when predicting the next word. Where earlier NLP frameworks were limited to smaller contexts, the Transformer could establish larger contexts and handle issues related to the ambiguity of text; building on this, BERT performs exceptionally well on deep learning-based NLP tasks. By reading bidirectionally (right to left and left to right), BERT enables an NLP model to understand the semantic meaning of a sentence such as “The market valuation of XX firm stands at XX%”, and its pre-training also includes predicting whether one sentence follows another.
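As a concrete illustration (assuming the Hugging Face transformers package, which is not mentioned in the article), a pre-trained BERT model can fill in a masked word using context from both sides of the gap:

    # pip install transformers torch
    from transformers import pipeline

    # Masked-language-model head on top of pre-trained BERT.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # BERT sees the whole sentence at once, so both the left context
    # ("market valuation") and the right context ("at 5 billion dollars")
    # inform its guess for the masked word.
    sentence = "The market valuation of the firm [MASK] at 5 billion dollars."
    for candidate in fill_mask(sentence):
        print(candidate["token_str"], round(candidate["score"], 3))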



Quote for the day:

“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” -- Mark Twain

Daily Tech Digest - July 30, 2021

Five steps towards cloud migration for a remote workforce

Reducing costs is one of the key reasons many businesses move to the cloud, with a Microsoft survey identifying this as a top benefit of cloud migration. However, the cost of the migration project itself also needs to be taken into consideration. Some businesses will undertake this exercise in-house if they have an IT team that is big and experienced enough to take on the project or to keep costs low. But if your internal IT support team is small or you already take out managed IT services, we recommend utilising a third-party provider. A business with expertise in cloud consultancy will manage the entire process for you and ensure that your migration goes as smoothly as possible. Their extensive experience in deploying cloud solutions and cloud migrations means you’ll experience a smoother journey to cloud computing. While carrying out this project in-house may seem more cost-effective on the face of it, cloud experts will help you to reduce costs by considering every possibility and mitigating any potential risks. Moving workloads to the cloud is an essential step for businesses that are looking to reduce IT operating costs, increase security, and improve efficiency and productivity.


Cloud Security Basics CIOs and CTOs Should Know

Cloud environments have proven not to be inherently secure (as originally assumed). For the past several years, there have been active debates about whether cloud is more or less secure than a data center, particularly as companies move further into the cloud. Highly regulated companies tend to control their most sensitive data and assets from within their data centers and have moved less-critical data and workloads to cloud. On the flip side, Amazon, Google, and Microsoft spend considerably more on security than the average enterprise, and for that reason, some believe cloud environments are more secure than on-premises data centers. "AWS, Microsoft, and Google are creators of infrastructure and application deployment platforms. They're not security companies," said Richard Bird, chief customer information officer at multi-cloud identity solution provider Ping Identity. "The Verizon Data Breach Investigations Report says about 30% of all breaches are facilitated by human error. That same 30% applies to AWS, Microsoft, and Google. [Cloud] cost reductions don't come with a corresponding decrease in risk."


How To Defend Yourself Against The Powerful New NSO Spyware Attacks

Unlike infection attempts which require that the target perform some action like clicking a link or opening an attachment, zero-click exploits are so called because they require no interaction from the target. All that is required is for the targeted person to have a particular vulnerable app or operating system installed. Amnesty International’s forensic report on the recently revealed Pegasus evidence states that some infections were transmitted through zero-click attacks leveraging the Apple Music and iMessage apps. This is not the first time NSO Group’s tools have been linked to zero-click attacks. A 2017 complaint against Panama’s former President Ricardo Martinelli states that journalists, political figures, union activists, and civic association leaders were targeted with Pegasus and rogue push notifications delivered to their devices, while in 2019 WhatsApp and Facebook filed a complaint claiming NSO Group developed malware capable of exploiting a zero-click vulnerability in WhatsApp. As zero-click vulnerabilities by definition do not require any user interaction, they are the hardest to defend against.


Distributed DevOps Teams: Enabling Non-Stop Delivery

An important element of most DevOps teams is cultural integration; learning about and from each other, establishing the psychological safety within the team to fail in front of your peers, the proverbial finishing of each other’s sentences… it’s simply harder to establish this level of cultural cohesiveness when you are working in distributed teams. Leaders are also challenged; how do they recognize when a team member needs help, needs to be prompted, or requires clearer direction without the body language cues or without any interaction at all, if they are in completely different time zones? As a leader, recognizing when to intervene, when to support, and when to engage is challenging when the team is delivering outside of view. Trust becomes crucial between all team members. This particular organization is currently considering "time zone rotation" so that team members can establish working relationships and trust outside of their own normal working time group.


Building A Secure Cloud Infrastructure For Strong Data Protection

Sometimes the terms “security” and “privacy” are used interchangeably, but it is vital to understand the nuances between the two when building a secure cloud infrastructure. Data privacy is associated with ensuring that personally identifiable information (PII) stored in the cloud is hidden. Privacy regulations, such as the EU’s GDPR and the California Consumer Privacy Act (CCPA), dictate what data is considered private and that the data remains pseudonymized at all times. Data security, on the other hand, pertains to specific protections that have to be built into the infrastructure to prevent data from being stolen. Building a secure cloud infrastructure is predicated upon understanding the right mix of privacy and security measures, which can vary based on an organization’s industry and the specific regulations to which it must adhere. Many organizations aren’t clear on how to protect data in the cloud. The natural assumption is that the cloud provider will handle security, but that is not the case. When migrating to the cloud, most providers lay out a shared responsibility model for protection, meaning the provider is responsible for specific security areas and the company is responsible for others.
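To make the privacy half concrete, here is a minimal sketch (an illustration under stated assumptions, not a compliance recipe) of pseudonymizing a PII field with a keyed hash, so records stay joinable for analytics without exposing the raw identifier:

    import hmac
    import hashlib

    # Assumption: in practice this key lives in a secrets manager, never
    # beside the data -- whoever holds it can re-link pseudonyms to people.
    PSEUDONYM_KEY = b"replace-with-managed-secret"

    def pseudonymize(pii_value: str) -> str:
        """Deterministic keyed hash: same input -> same token across tables."""
        return hmac.new(PSEUDONYM_KEY, pii_value.encode(), hashlib.sha256).hexdigest()

    record = {"email": "jane@example.com", "purchase_total": 129.95}
    record["email"] = pseudonymize(record["email"])
    print(record)  # the PII field is now a stable pseudonym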


7 Best Soft Skills That Make a Great Software Developer

Everyone can talk, but not everyone can communicate. Being a software developer means understanding a whole new language: the language of code, with all the acronyms and technical terms that come with it. These terms may seem simple to you, but will all your colleagues understand them? Work on your communication skills by considering carefully the language you use and tailoring it to your audience. Could you explain agile software testing to a computing novice, for example? By honing your communication soft skills you can reach out to more people. These first two soft skills go hand in hand: to be a great communicator, you also have to be a great listener. Remember that everyone you work with and speak to deserves to be listened to, and they may have information that will make your job easier. Put distractions to one side, and concentrate completely on the person who’s talking to you. Keep an eye out for non-verbal communication signs too, as they can often reveal as much as what a person is saying.


McAfee: Babuk ransomware decryptor causes encryption 'beyond repair'

"It seems that Babuk has adopted live beta testing on its victims when it comes to its Golang binary and decryptor development. We have seen several victims' machines encrypted beyond repair due to either a faulty binary or a faulty decryptor," Seret and Keijzer said. "Even if a victim gave in to the demands and was forced to pay the ransom, they still could not get their files back. We strongly hope that the bad coding also affects Babuk's relationship with its affiliates. The affiliates perform the actual compromise and are now faced with a victim who cannot get their data back even if they pay. This essentially changes the crime dynamic from extortion to destruction, which is much less profitable from a criminal's point of view." The typical Babuk attack features three distinct phases: initial access, network propagation, and action on objectives. Babuk also operated a ransomware-as-a-service model before shutting down in April. Northwave investigated a Babuk attack that was perpetrated through the CVE-2021-27065 vulnerability also being exploited by the HAFNIUM threat actor.


Cisco preps now for the hybrid workforce

The lasting impact of remote work is resulting in a reassessment of the IT infrastructure that shifts buyer requirements to demand work-anywhere capabilities, said Ranjit Atwal, senior research director at Gartner. “Through 2024, organizations will be forced to bring forward digital business transformation plans by at least five years,” Atwal said. “Those plans will have to adapt to a post-COVID-19 world that involves permanently higher adoption of remote work and digital touchpoints.” Digital products and services will play a big role in these digital transformation efforts, Atwal stated. “This longer strategic plan requires continued investment in strategic remote-first technology continuity implementations along with new technologies such as hyperautomation, AI and collaboration technologies to open up more flexibility of location choice in job roles,” Atwal stated. The hybrid workforce will need every technology from SD-WAN and SASE to a full stack collaboration suite--in Cisco's case WebEx--and best-in-class security and Wi-Fi and failover options, Nightingale said.


Silver linings: 7 ways CIOs say IT has changed for good

A positive change was the unbridled collaboration and coming together – without traditional borders or silos – to solve the exceptional challenges the pandemic threw at us. COVID-19 triggered physical social distancing while at the same time it bolstered digital connectedness and accelerated a culture shift to a more flexible work model. There was a pervasive focus on the wellbeing of each individual, and an intentional effort to hear from each person, which further diversifies input and insights to solve problems. The Sappi team came together in this manner and continues to carry forward those positive elements of inclusive and optimistic collaboration to navigate each effort with confidence that we will have a thriving future ahead. ... From the start of the pandemic, we leveraged these competencies and our fortitude to successfully solve business challenges and meet our goals and objectives. The demand for digital experiences and customers’ expectations for seamless digital offerings continues to increase, and MassMutual’s digital and technology advancements and digital-first mindset allow us to offer more modern tools at lower costs and provide an overall better customer experience.


What should IT leaders look for in an SD-WAN solution?

Delivering high performance, affordable SD-WAN solutions is not something everyone can do. For that reason, when an IT leader complains of connectivity speeds, the easier option is for providers to simply recommend more bandwidth. And, with the cost of circuits falling, it’s hard to push back on this apparent resolution. However, for many businesses, traditional networks will no longer be fit for purpose. We’re not all in the same network anymore, so it’s not a case of routing all the traffic into one place, through a huge firewall, and back out. The SD-WAN alternative sounds complex, and it really is – we’re talking an intelligent, responsive, end-to-end encrypted network with AI at its heart, after all. However, from the IT leader’s perspective, it is deployed with zero touch provisioning, no hardware installations, and self-configuration for ultimate ease. IT teams are here to deliver IT services, after all. They don’t want to be held back by infrastructure constraints. It’s about time that tech enabled them to do their jobs.



Quote for the day:

"Nothing is so potent as the silent influence of a good example." -- James Kent

Daily Tech Digest - July 29, 2021

How enterprise architects need to evolve to survive in a digital world

Traditionally, enterprise architects needed to be able to translate business needs into IT requirements or figure out how to negotiate a better IT system deal. That’s still important, but now they also need to be able to talk to board members and executive teams about the business implications of technology decisions, particularly around M&A. If the CEO wants to be able to acquire and divest new companies every year, the enterprise architect needs to explain the system landscape that requires, and in a merger context, what systems to merge and how. If the company invests in a new enterprise resource planning (ERP) system, the enterprise architect should be able to articulate the implications and the effect on the P&L. This level of conversation cannot be based on boxes and diagrams on PowerPoint, which is often the default but a largely theoretical approach. Instead, enterprise architects have to be able to use practical “business” language to communicate and articulate the ROI of architecture decisions and how they contribute to business-outcome key performance indicators.


New Android Malware Uses VNC to Spy and Steal Passwords from Victims

"The actors chose to steer away from the common HTML overlay development we usually see in other Android banking Trojans: this approach usually requires a larger time and effort investment from the actors to create multiple overlays capable of tricking the user. Instead, they chose to simply record what is shown on the screen, effectively obtaining the same end result." ... What's more, the malware employs ngrok, a cross-platform utility used to expose local servers behind NATs and firewalls to the public internet over secure tunnels, to provide remote access to the VNC server running locally on the phone. Additionally, it also establishes connections with a command-and-control (C2) server to receive commands over Firebase Cloud Messaging (FCM), the results of which, including extracted data and screen captures, are then transmitted back to the server. ThreatFabric's investigation also connected Vultur with another well-known piece of malicious software named Brunhilda, a dropper that utilizes the Play Store to distribute different kinds of malware in what's called a "dropper-as-a-service" (DaaS) operation, citing overlaps in the source code and C2 infrastructure used to facilitate attacks.


DevOps still 'rarely done well at scale' concludes report after a decade of research

A cross-functional team is one that spans the whole application lifecycle from code to deployment, as opposed to a more specialist team that might only be concerned with database administration, for example. Are cross-functional teams a good thing? "It depends," Kersten said. "There are underlying strata of technology that are better off centralized, particularly if you've got regulatory burdens, but that doesn't mean you shouldn't have cross-functional teams … too far in either direction is definitely terrible. The biggest problem we see is if there isn't a culture of sharing practices amongst each other." One thing to avoid, said Kersten, is a DevOps team. "I think we've broken the term DevOps team inside organisations," he told us. "I think it has passed beyond useful … calling your folk DevOps engineers or cloud engineers, these sorts of imprecise titles are not particularly useful, and DevOps is particularly broken." What if an organization reads the report and realises that it is not good at public cloud and not effective at DevOps, what should it do? "First optimize for the team," said Kersten.


DeepMind Launches Evaluation Suite For Multi-Agent Reinforcement Learning

Melting Pot is a new evaluation technique that assesses generalisation to novel situations involving both familiar and unfamiliar individuals. It can test a broad range of social interactions such as cooperation, deception, competition, trust, reciprocation and stubbornness. Unlike multi-agent reinforcement learning (MARL), which lacks a broadly accepted benchmark test, single-agent reinforcement learning (SARL) has a diverse set of benchmarks suitable for different purposes; MARL has a relatively less favourable evaluation landscape than other machine learning subfields. Melting Pot offers a set of 21 MARL multi-agent games, or ‘substrates’, to train agents on, and more than 85 unique test scenarios for evaluating those agents. A central equation – Substrate + Background Population = Scenario – captures the essence of the Melting Pot technique. A substrate is a partially observable general-sum Markov game: a game of imperfect information in which each player holds private information unknown to its co-players. The substrate includes the layout of the map, where objects are located, and how they move.


What to Look for When Scaling Your Data Team

Today, data-driven innovation has become a strategic imperative for just about every company, in every industry. But as organizations expand their investment in analytics, AI/ML, business intelligence, and more, data teams are struggling to keep pace with the expectations of the business. Businesses will only continue to rely more heavily on their data teams. However, recent survey research suggests that 96% of data teams are already at or over their work capacity. To avoid leaving their teams in a lurch, many organizations will need to significantly scale their data team’s operations, both in terms of efficiencies and team size. In fact, 79% of data teams indicated that infrastructure is no longer the scaling problem — this puts the focus on people and team capacity. But what should managers look for when growing their teams? And what tools can provide relief for their already overburdened staff? The first step that managers of data teams must do is to evaluate their teams’ current skills in close alignment with the projected needs of the business. Doing so can provide managers with a deeper understanding of what skill sets to look for when interviewing candidates.


Eight Signs Your Agile Testing Isn’t That Agile

When you have a story in a sprint, and you find an issue with that story, what do you do? For many teams, the answer is still “file a defect.” In waterfall development, test teams would get access to a new build with new features all at once. They would then start a day-, week-, or even month-long testing cycle. Given the number of defects that would be found and the time between discovery and fixing, it was critical to document every single one. This documentation is not necessary in Agile development. When you find an issue, collaborate with the developer and get the issue fixed, right then and there, in the same day or at least in the same sprint. If you need to persist information about the defect, put it in the original story. There is no need to introduce separate, additional documentation. There are only two reasons you should create a defect. One: an issue was found for previously completed work, or for something that is not tied to any particular story. This issue needs to be recorded as a defect and prioritized. (But, see next topic!)


Mitre D3FEND explained: A new knowledge graph for cybersecurity defenders

D3FEND is the first comprehensive examination of this data, but assembling it wasn’t without its difficulties. Using the patent database as original source material for this project was both an inspiration and a frustration. Kaloroumakis got the idea when he had to review patent filings when he was CTO of Bluvector.io, a security company, before he came to Mitre. “There is an incredible variance in technical specifics across the patent collection,” he says. “With some patents, little is left to your imagination, but others are more generic and harder to figure out.” He was surprised at the thousands of cybersecurity patent filings he found. “Some vendors have more than a hundred filings,” he said and noted that he has not cataloged every single cybersecurity patent in the collection. Instead, he has used the collection as a means to an end, to create the taxonomies and knowledge graph for the project. He also wanted to emphasize that just because a technology or a particular security method is mentioned in a patent filing doesn’t mean that this method actually finds its way into the actual product.


Benefits of Loosely Coupled Deep Learning Serving

Another convincing aspect of choosing message-mediated DL serving is its easy adaptability. There is a learning curve for any web framework or library, even micro-frameworks such as Flask, if one wants to exploit its full potential. On the other hand, one does not need to know the internals of messaging middleware; furthermore, all major cloud vendors provide their own managed messaging services that take maintenance out of the engineers’ backlog. This also has many advantages in terms of observability. As messaging is separated from the main deep learning worker with an explicit interface, logs and metrics can be aggregated independently. On the cloud, this may not even be needed, as managed messaging platforms handle logging automatically with additional services such as dashboards and alerts. The same queuing mechanism lends itself natively to auto-scalability as well. Stemming from high observability, what queuing brings is the freedom to choose how to auto-scale the workers. In the next section, an auto-scalable container deployment of DL models will be shown using KEDA.
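A minimal sketch of the pattern, assuming a Redis list as the messaging middleware and a stub in place of a real model (both assumptions of this example, not choices from the article):

    # pip install redis
    import json
    import redis

    r = redis.Redis()

    def predict(payload: dict) -> dict:
        """Stand-in for a real deep learning model."""
        return {"label": "positive", "confidence": 0.97, "input": payload}

    # The worker knows nothing about HTTP: it just pops requests and
    # pushes results, so it can be scaled (e.g., by KEDA) on queue depth.
    while True:
        _, raw = r.blpop("inference:requests")  # blocks until a job arrives
        job = json.loads(raw)
        result = predict(job["payload"])
        r.rpush(f"inference:results:{job['id']}", json.dumps(result))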


Should You Trust Low Code/No Code for Mission-Critical Applications?

More enterprises now understand the value of low code and no code, though the differences between those product categories are worth considering. Low code is aimed at developers and power users. No code targets non-developers working in lines of business. The central idea is to get to market faster than is possible with traditional application development. ... In some cases, it makes a lot of sense to use low code, but not always. In Frank's experience, an individual enterprise's requirements tend to be less unique than the company believes and therefore it may be wiser to purchase off-the-shelf software that includes maintenance. For example, why build a CRM system when Salesforce offers a powerful one? In addition, Salesforce employs more developers than most enterprises. About six years ago, Bruce Buttles, digital channels director at health insurance company Humana, was of the opinion that low code/no code systems "weren't there yet," but he was ultimately proven wrong. "I looked at them and spent about three months building what would be our core product, four or five different ways using different platforms. I was the biggest skeptic," said Buttles.


Confidence redefined: The cybersecurity industry needs a reboot

As businesses continue to adjust to the virtual and flex workplace, a common fear is loss of productivity and, ultimately, damage to their bottom line. While many enterprises were already on a “digital transformation” journey, this new dynamic has added the need for fresh thinking. As a result, many organizations are implementing new applications to ensure day-to-day activities remain seamless, but are unknowingly — or, in some cases, knowingly — sacrificing security in the process. This is an expansive area of risk for many businesses. Truth be told, the human (and even non-human) workforce will always come with a certain risk level, but now a distributed workforce often provides malicious actors with more opportunities to do their dirty work; most organizations have created a larger “attack surface” as a result of the pandemic. To allow their businesses to thrive going forward, the key for leaders in both IT and business is to focus on enablement and security – providing access to important technology and tools but properly controlling access to keep your business and your customers’ critical assets protected.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - July 28, 2021

DevOps Is Dead, Long Live AppOps

The NoOps trend aims to remove the friction between development and operations by, as the name suggests, simply removing operations. This may seem a drastic solution, but we do not have to take it literally. The right interpretation — the feasible one — is to remove the human component from the deployment and delivery phases as much as possible. That approach is naturally supported by the cloud, which helps things work by themselves. ... One of the clearest scenarios illustrating the benefit of AppOps is any application based on Kubernetes. Open any cluster and you will find a lot of pod/service/deployment settings that are mostly the same. In fact, every PHP application has the same configuration, except for its parameters; the same goes for Java, .NET, and other applications. The problem is that Kubernetes is agnostic to the content of the applications it hosts, so it has to be told every detail. We have to start from the beginning for each new application, even if the technology is the same. Why? How a PHP application is composed should only have to be explained once, as the sketch below illustrates.
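A small sketch of that idea (a hypothetical helper, not an existing tool): describe the shape of a PHP application once, then stamp out per-app Kubernetes manifests from parameters alone.

    # pip install pyyaml
    import yaml

    def php_app_manifest(name: str, image: str, replicas: int = 2) -> str:
        """Every PHP app shares this shape; only the parameters differ."""
        deployment = {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": name},
            "spec": {
                "replicas": replicas,
                "selector": {"matchLabels": {"app": name}},
                "template": {
                    "metadata": {"labels": {"app": name}},
                    "spec": {"containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": 80}],
                    }]},
                },
            },
        }
        return yaml.dump(deployment)

    # One explanation of "a PHP app", reused for every new application.
    print(php_app_manifest("shop-frontend", "registry.example.com/shop:1.4"))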


Thrill-K: A Blueprint for The Next Generation of Machine Intelligence

Living organisms and computer systems alike must have instantaneous knowledge to allow for rapid response to external events. This knowledge represents a direct input-to-output function that reacts to events or sequences within a well-mastered domain. In addition, humans and advanced intelligent machines accrue and utilize broader knowledge with some additional processing. I refer to this second level as standby knowledge. Actions or outcomes based on standby knowledge require processing and internal resolution, which makes it slower than instantaneous knowledge but applicable to a wider range of situations. Humans and intelligent machines also need to interact with vast amounts of world knowledge so that they can retrieve the information required to solve new tasks or increase standby knowledge. Whatever the scope of knowledge within the human brain or the boundaries of an AI system, there is substantially more relevant information outside it that warrants retrieval. We refer to this third level as retrieved external knowledge.
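One way to picture the three levels (a loose illustrative sketch, not the author's design) is as a tiered lookup: a reflex table answers instantly, a local store answers after some internal processing, and everything else falls through to external retrieval.

    def fetch_from_external_source(query: str) -> str:
        # Level 3 stand-in: imagine a search or knowledge-base API call here.
        return f"(retrieved externally) no local knowledge about: {query}"

    def answer(query: str, reflexes: dict, standby_store: dict) -> str:
        # Level 1: instantaneous knowledge -- a direct input-to-output mapping.
        if query in reflexes:
            return reflexes[query]
        # Level 2: standby knowledge -- slower, needs internal resolution.
        for topic, fact in standby_store.items():
            if topic in query:
                return f"(after some processing) {fact}"
        # Level 3: retrieved external knowledge -- vast, but slowest to reach.
        return fetch_from_external_source(query)

    reflexes = {"red light": "brake"}
    standby = {"capital of France": "Paris"}
    print(answer("red light", reflexes, standby))
    print(answer("what is the capital of France?", reflexes, standby))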


GitHub’s Journey From Monolith to Microservices

Good architecture starts with modularity. The first step towards breaking up a monolith is to think about the separation of code and data based on feature functionalities. This can be done within the monolith before physically separating them in a microservices environment. It is generally a good architectural practice to make the code base more manageable. Start with the data and pay close attention to how they’re being accessed. Make sure each service owns and controls access to its own data, and that data access only happens through clearly defined API contracts. I’ve seen a lot of cases where people start by pulling out the code logic but still rely on calls into a shared database inside the monolith. This often leads to a distributed monolith scenario where it ends up being the worst of both worlds - having to manage the complexities of microservices without any of the benefits. Benefits such as being able to quickly and independently deploy a subset of features into production. Getting data separation right is a cornerstone in migrating from a monolithic architecture to microservices. 
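A before-and-after sketch (hypothetical service and table names) of the trap described above: an extracted service that still reaches into the monolith's shared database, versus one that only goes through the owning service's API contract.

    import sqlite3
    import requests

    # Anti-pattern: the extracted billing service still queries the
    # monolith's shared customer table directly -- a distributed monolith.
    def get_customer_shared_db(conn: sqlite3.Connection, customer_id: int):
        cursor = conn.execute(
            "SELECT name, email FROM customers WHERE id = ?", (customer_id,)
        )
        return cursor.fetchone()

    # Microservices pattern: only the customer service owns customer data;
    # everyone else calls its clearly defined API contract.
    def get_customer_via_api(customer_id: int) -> dict:
        resp = requests.get(
            f"https://customers.internal/api/v1/customers/{customer_id}"
        )
        resp.raise_for_status()
        return resp.json()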


Data Strategy vs. Data Architecture

By being abstracted from the problem solving and planning process, enterprise architects became unresponsive, he said, and “buried in the catacombs” of IT. Data Architecture needs to look at finding and putting the right mechanisms in place to support business outcomes, which could be everything from data systems and data warehouses to visualization tools. Data architects who see themselves as empowered to facilitate the practical implementation of the Business Strategy by offering whatever tools are needed will make decisions that create data value. “So now you see the data architect holding the keys to a lot of what’s happening in our organizations, because all roads lead through data.” Algmin thinks of data as energy, because stored data by itself can’t accomplish anything, and like energy, it comes with significant risks. “Data only has value when you put it to use, and if you put it to use inappropriately, you can create a huge mess,” such as a privacy breach. Like energy, it’s important to focus on how data is being used and have the right controls in place. 


Why CISA’s China Cyberattack Playbook Is Worthy of Your Attention

In the new advisory, CISA warns that the attacks will also compromise email and social media accounts to conduct social engineering attacks. A person is much more likely to click on an email and download software if it comes from a trusted source. If the attacker has access to an employee's mailbox and can read previous messages, they can tailor their phishing email to be particularly appealing – and even make it look like a response to a previous message. Unlike “private sector” criminals, state-sponsored actors are more willing to use convoluted paths to get to their final targets, said Patricia Muoio, former chief of the NSA’s Trusted System Research Group, who is now general partner at SineWave Ventures. ... Private cybercriminals look for financial gain. They steal credit card information and health care data to sell on the black market, hijack machines to mine cryptocurrencies, and deploy ransomware. State-sponsored attackers are after different things. If they plan to use your company as an attack vector to go after another target, they'll want to compromise user accounts to get at their communications. 


Breaking through data-architecture gridlock to scale AI

Organizations commonly view data-architecture transformations as “waterfall” projects. They map out every distinct phase—from building a data lake and data pipelines up to implementing data-consumption tools—and then tackle each only after completing the previous ones. In fact, in our latest global survey on data transformation, we found that nearly three-quarters of global banks are knee-deep in such an approach. However, organizations can realize results faster by taking a use-case approach. Here, leaders build and deploy a minimum viable product that delivers the specific data components required for each desired use case (Exhibit 2). They then make adjustments as needed based on user feedback. ... Legitimate business concerns over the impact any changes might have on traditional workloads can slow modernization efforts to a crawl. Companies often spend significant time comparing the risks, trade-offs, and business outputs of new and legacy technologies to prove out the new technology. However, we find that legacy solutions cannot match the business performance, cost savings, or reduced risks of modern technology, such as data lakes.


Data-Intensive Applications Need Modern Data Infrastructure

Modern applications are data-intensive because they make use of a breadth of data in more intricate ways than anything we have seen before. They combine data about you, about your environment, about your usage and use that to predict what you need to know. They can even take action on your behalf. This is made possible because of the data made available to the app and data infrastructure that can process the data fast enough to make use of it. Analytics that used to be done in separate applications (like Excel or Tableau) are getting embedded into the application itself. This means less work for the user to discover the key insight or no work as the insight is identified by the application and simply presented to the user. This makes it easier for the user to act on the data as they go about accomplishing their tasks. To deliver this kind of application, you might think you need an array of specialized data storage systems, ones that specialize in different kinds of data. But data infrastructure sprawl brings with it a host of problems.  


The Future of Microservices? More Abstractions

A couple of other initiatives regarding Kubernetes are worth tracking. Jointly created by Microsoft and Alibaba Cloud, the Open Application Model (OAM) is a specification for describing applications that separate the application definition from the operational details of the cluster. It thereby enables application developers to focus on the key elements of their application rather than the operational details of where it deploys. Crossplane is the Kubernetes-specific implementation of the OAM. It can be used by organizations to build and operate an internal platform-as-a-service (PaaS) across a variety of infrastructures and cloud vendors, making it particularly useful in multicloud environments, such as those increasingly commonly found in large enterprises through mergers and acquisitions. Whilst OAM seeks to separate out the responsibility for deployment details from writing service code, service meshes aim to shift the responsibility for interservice communication away from individual developers via a dedicated infrastructure layer that focuses on managing the communication between services using a proxy. 


Navigating data sovereignty through complexity

Data sovereignty is the concept that data is subject to the laws of the country in which it is processed. In a world where there is rapid adoption of SaaS, cloud and hosted services, it becomes easy to see the issues that data sovereignty can raise. In simpler times, data wasn’t something businesses needed to be concerned about; it could be shared and transferred freely with no consequence. Businesses that had a digital presence operated on a small scale and with low data demands hosted on on-premise infrastructure. This meant that data could be monitored and kept secure, much different from the more distributed and hybrid systems that many businesses use today. With so much data sharing and a lack of regulation, it all came crashing down with the Cambridge Analytica scandal, prompting strict laws on privacy. ... When dealing with on-premise infrastructure, governance is clearer, as it must follow the rules of the country it’s in. However, when it’s in the cloud, a business can store its data in any number of locations regardless of where the business itself is.


How security leaders can build emotionally intelligent cybersecurity teams

EQ is important, as it has been found by Goleman and Cary Cherniss to positively influence team performance and to cultivate positive social exchanges and social support among team members. However, rather than focusing on cultivating EQ, cybersecurity leaders such as CISOs and CIOs are often preoccupied by day-to-day operations (e.g., dealing with the latest breaches, the latest threats, board meetings, team meetings and so on). In doing so, they risk overlooking the importance of the development and strengthening of their own emotional intelligence (EQ) and that of the individuals within their teams. As well as EQ considerations, cybersecurity leaders must also be conscious of the team’s makeup in terms of gender, age and cultural attributes and values. This is very relevant to cybersecurity teams as they are often hugely diverse. Such values and attributes will likely introduce a diverse set of beliefs defined by how and where an individual grew up and the values of their parents. 



Quote for the day:

"The mediocre leader tells The good leader explains The superior leader demonstrates The great leader inspires." -- Buchholz and Roth

Daily Tech Digest - July 27, 2021

How AI Algorithms Are Changing Trading Forever

The Aite Group, in its report "Hedge Fund Survey, 2020: Algorithmic Trading," argues that the main reason for the growing popularity of algorithms in trading is the attempt to reduce the influence of the human factor in a highly volatile market. The economic fallout from COVID-19 saw a record-breaking drop in the American, European, and Chinese stock markets, and only a few months later, measures to stimulate the economy stopped the fall and reversed the downtrend. Thus, we get the first task of algorithmic trading: risk reduction in a market with high volatility. The second global advantage of algorithmic trading lies in the ability to analyze the potential impact of a trade on the market. This can be especially useful for hedge funds and institutional investors who handle large sums of money with a visible effect on price movements. The third fundamental advantage of trading algorithms is protection from emotions. Traders and investors, like all living people, experience fear, greed, the sting of lost profits, and other emotions. These emotions have a negative impact on performance and results.
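The third point is easiest to see in code: a rule such as a moving-average crossover (a textbook toy strategy, shown only as an illustration, not trading advice) fires on data alone, with no fear or greed in the loop.

    def moving_average(prices: list[float], window: int) -> float:
        return sum(prices[-window:]) / window

    def crossover_signal(prices: list[float]) -> str:
        """Buy when the short-term average rises above the long-term one."""
        if len(prices) < 30:
            return "hold"                # not enough history yet
        short = moving_average(prices, 10)
        long_ = moving_average(prices, 30)
        if short > long_ * 1.01:         # 1% band filters out noise
            return "buy"
        if short < long_ * 0.99:
            return "sell"
        return "hold"                    # the rule never panics or chases

    # A steady uptrend produces a "buy" -- the same answer every time,
    # regardless of how the trader feels that morning.
    print(crossover_signal([100 + 0.5 * i for i in range(40)]))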


How to prevent corporate credentials ending up on the dark web

Employees are the weakest link in any organization’s security posture. A Tessian report found that 43% of US and UK employees have made mistakes that resulted in cybersecurity repercussions for their organizations. Phishing scams, including emails that try to trick employees into sharing corporate login details, are particularly common. Educating employees on cyber threats and how to spot them is crucial to mitigating attacks. However, for training to be effective, it needs to consist of more than just repetitive lectures. In the report mentioned above, 43% of respondents said a legitimate-looking email was the reason they fell for a phishing scam, while 41% of employees said they were fooled because the email looked like it came from higher up. Live-fire security drills can help employees familiarize themselves with real-world phishing attacks and other password hacks. Safety awareness training should also teach workers the importance of good practices like using a virtual private network (VPN) when working from home and making social media accounts private.


IT leadership: 4 ways to find opportunities for improvement

Technology leaders should regularly use their own technology to better identify pain points and opportunities for improvements. That means that I should be teaching and using the same systems that faculty does to understand their experience through their lens. I should be meeting regularly with them and generating a Letterman-style Top 10 list of the things I hate most about my technology experience. This is something to do with the students, too. What do they hate most about the technology at the university? And how can we partner with them to address these issues over the next 12 months? Several years ago, for example, we reexamined our application process. If a prospective student wanted to submit an application, we required them to generate a unique username and password. If the one they chose was already taken, they needed to continue creating alternate versions until they eventually landed upon one that was available. If someone began the application process and logged off to complete it later, then forgot their username and password, they’d have to start all over again. It was absurd.


Data Management In The Age Of AI

AI is increasingly converging the traditional high-performance computing and high-performance data analytics pipelines, resulting in multi-workload convergence. Data analytics, training and inference are now being run on the same accelerated computing platform. Increasingly, the accelerated compute layer isn’t limited to GPUs⁠—it now involves FPGAs, graph processors and specialized accelerators. Use cases are moving from computer vision to multi-modal and conversational AI, and recommendation engines are using deep learning while low-latency inference is used for personalization on LinkedIn, translation on Google and video on YouTube. Convolutional neural networks (CNNs) are being used for everything from annotation and labeling to transfer learning. Learning is moving toward federated learning and active learning, while deep neural networks (DNNs) are becoming even more complex, with billions of parameters. The result of these transitions is different stages within the AI data pipelines, each with distinct storage and I/O requirements.


SASE: Building a Migration Strategy

Gartner's analysts say that "work from anywhere" and cloud-based computing have accelerated cloud-delivered SASE offerings to enable anywhere, anytime secure access from any device. Security and risk management leaders should build a migration plan from the legacy perimeter and hardware-based offerings to a SASE model. One hindrance to SASE adoption, some security experts tell me, is that organizations lack visibility into sensitive data and awareness of threats. Too many enterprises have separate security and networking teams that don't share information and lack an all-encompassing security strategy, they say. "While the vendors are touting SASE as the end-all solution, the key to success would depend upon how well we define the SASE operating model, particularly when there are so many vendors coming up with SASE-based solutions," says Bengaluru-based Sridhar Sidhu, senior vice president and head of the information security services group at Wells Fargo. Yask Sharma, CISO of Indian Oil Corp., says that as data centers move to the cloud, companies need to use SASE to enhance security while controlling costs.


How to Bridge the Gap between Netops and Secops

If you were designing the managerial structure for a software development firm from scratch today, it’s very unlikely that you would separate NetOps and SecOps in the first place. Seen from the perspective of 2021, many of the monitoring and visibility tools that both teams seek and use seem inherently similar. Unfortunately, however, the historical development of many firms has not been that simple. Teams and remits are not designed from the ground up with rationality in mind – instead they emerge from a complex series of interactions and ever-changing priorities. This means that different teams often end up with their own priorities, and can come to believe that they are more important than those of other parts of your organization. This is seen very clearly in the distinction between SecOps and NetOps teams in many firms. At the executive level, your network exists in order to facilitate connections – between systems and applications but above all between staff members. Yet for many NetOps teams, the network can come to be seen as an end in itself.


The future of data science and risk management

“Enterprise data is growing nearly exponentially, and it is also increasing in complexity in terms of data types,” said Morgan. “We have gone way past the time when humans could sift through this amount of data in order to see large-scale trends and derive actionable insights. The platforms and best practices of data science and data analytics incorporate technologies which automate the analytics workflows to a large extent, making dataset size and complexity much easier to tackle with far less effort than in years past. “The second value-add is to leverage machine learning, and ultimately artificial intelligence, to go beyond historical and near-real-time trend analysis and ‘look into the future’, so to speak. Predictive analysis can unveil new customer needs for products and services and then forecast consumer reactions to resultant offers. Equally, predictive analytics can help uncover latent anomalies that lead to much better predictions about fraud detection and potentially risky behaviour. “Nothing can foretell the future with 100% certainty, but the ability of modern data science to provide scary-smart predictive analysis goes well beyond what an army of humans could do manually.”


DevOps Observability from Code to Cloud

DevOps has transformed itself in the last few years, completely changing from what we used to see as siloed tools connected together to highly integrated, single-pane-of-glass platforms. Collaboration systems like JIRA, Slack, and Microsoft Teams are connected to your observability tools such as Datadog, Dynatrace, Splunk, and Elastic. Finally, IT service management tools like PagerDuty are also connected in. Tying these best-in-class tools together on one platform, such as the JFrog Platform, yields high value to enterprises looking for an observability workflow. The security folks also need better visibility into an enterprise’s systems, to look for vulnerabilities. A lot of this information is available in Artifactory and JFrog Xray, but how do we leverage this information in other partner systems like JIRA and Datadog? It all starts with JFrog Xray’s security impact, where we can generate the alert to Slack and robust security logs to Datadog to be analyzed by your Site Reliability Engineer. A PagerDuty incident that’s also generated from Xray can then be used to create a JIRA issue quickly.


6 Global Megatrends That Are Impacting Banking’s Future

The line between digital and physical has blurred, with consumers who once preferred brick-and-mortar engagements now researching, shopping and buying using digital channels more than ever. This trend is expected to increase across all industries. While organizations have enabled improved digital engagement over the past several months, there are still major pain points, mostly with speed, simplicity and cross-channel integration during the ‘first mile’ of establishing a relationship. The retail industry already understands that consumers are becoming increasingly impatient, wanting the convenience and transparency of eCommerce and the service and humanization of physical stores. In banking, consumers are diversifying their financial relationships, moving to fintech and big tech providers that can open relationships in an instant and personalize experiences. According to Brett King, founder of Moven and author of the upcoming book, ‘The Rise of Technosocialism’, “The ability to acquire new customers at ‘digital scale’ will impact market share and challenge existing budgets for branches. ..."


Understanding Contextual AI In The Modern Business World

Contextual AI can be divided into three pillars that help businesses become more visible to the people they want to reach. In the same sense, when a business is looking for a partner, it has to be sure that a prospect can offer the right services to fulfill its goals. Contextual AI aims to deliver that. The technology allows a brand to enhance its understanding of consumer interests. It is easy to make assumptions about consumer interests in different sectors, but difficult to prove them. ... In previous years, contextual AI was seen as an enhancing technology, but not an essential one. Now, the recognition of contextual AI as more than simply enhancing is growing. Businesses are constantly looking for more cost-effective solutions to their problems, and contextual AI offers a solution that fits that bracket. If you look at a similar alternative, such as behavioral advertising, it is heavily reliant on data, and lots of it. The huge amounts of data required to make this a success mean that businesses have to implement a successful collection, analysis and reporting solution in order to leverage it effectively. This can be a costly process if a business does not have large economies of scale.



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - July 26, 2021

CIOs and CFOs: creating a value-driven partnership

The CFO/CIO relationship is evolving in the UK and elsewhere in the world. The digitisation of everything is forcing both functions to recognise that technology is not just integral to running the business efficiently, but also permeates every aspect of business strategy and how companies define competitive advantage. Consequently, technology is exerting much greater influence on the way CFOs and CIOs think about their roles and how they define value for their organisations. ... “Technology is expanding the roles that CFOs and CIOs play in an organisation…”. It implies the need for closer collaboration between IT and finance in this country. If both roles collaborate and ask meaningful questions of each other, their shared expertise will enable them to better understand their contribution to delivering value for the business and how their combined skillsets can leverage the benefits of digitisation to become more productive. Yet, not all is sweetness and success, because traditionally both functions have come from very different standpoints when it comes to what value means to their organisations: “While the CFO-CIO relationship is interconnected, sometimes it can become divided, as both often speak different ‘languages’ about the same topic”.


Ignore API security at your peril

Many organizations are quick to embrace the potential and possibilities of connected devices and apps. However, they frequently neglect to put in place the right technology and processes needed to make their APIs secure. Understanding APIs in terms of private/partner/public differences and understanding that these are not the same as internal/external is just the start. Organizations should have both an API strategy and a well-managed API management platform in place, so that a thorough security review is undertaken before teams expose any API design to anybody. Similarly, any identified issue needs to be handled in a highly structured way. This includes conducting a full assessment of the impact and scope of reported vulnerabilities and having processes in place to ensure that all these issues are then resolved in a timely manner to prevent bigger problems arising further down the road. As organizations push ahead with using APIs to power up digital transformation and deploy a new generation of app-based services, the risk of unauthorized access and data exposure is growing.
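As one concrete guardrail of the kind such a security review would insist on, here is a minimal sketch of rejecting unauthenticated API calls before any handler runs. The header name and single-key store are assumptions; a real deployment would delegate this to the API management platform, with OAuth, rate limiting and audit logging on top.

```python
# A minimal sketch of an authentication guardrail applied to every request.
# Header name and single-key store are illustrative assumptions.
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
VALID_API_KEY = os.environ["API_KEY"]  # provisioned per client in practice

@app.before_request
def require_api_key():
    supplied = request.headers.get("X-API-Key", "")
    # Constant-time comparison avoids leaking the key via timing differences.
    if not hmac.compare_digest(supplied, VALID_API_KEY):
        abort(401)

@app.get("/orders")
def list_orders():
    return {"orders": []}  # placeholder payload
```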


AI Liability Risks to Consider

Most AI systems are not autonomous. They provide results, they make recommendations, but if they're going to make automatic decisions that could negatively impact certain individuals or groups (e.g., protected classes), then not only should a human be in the loop, but also a group of individuals who can help identify the potential risks early on, such as people from legal, compliance, risk management, privacy, etc. ... It states, "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." While there are a few exceptions, such as getting the user's express consent or complying with other laws EU members may have, it's important to have guardrails that minimize the potential for lawsuits, regulatory fines and other risks. "You have people believing what is told to them by the marketing of a tool and they're not performing due diligence to determine whether the tool actually works," said Devika Kornbacher, a partner at law firm Vinson & Elkins. "Do a pilot first and get a pool of people to help you test the veracity of the AI output – data science, legal, users or whoever should know what the output should be."
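Here is a minimal sketch of the human-in-the-loop guardrail described above: the system applies a decision automatically only when model confidence is high and the decision has no legal or similarly significant effect, and queues everything else for human review. The threshold and the definition of "significant effect" are assumptions that legal and compliance teams would have to pin down.

```python
# A minimal sketch of a human-in-the-loop routing guardrail. The confidence
# threshold and the notion of "significant effect" are illustrative
# assumptions a legal/compliance team would need to define.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # illustrative

@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float
    significant_effect: bool  # e.g. credit denial, hiring rejection

def route(decision: Decision) -> str:
    if decision.significant_effect or decision.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"  # a person decides; the model only recommends
    return "auto_apply"

print(route(Decision("u1", "approve", 0.99, significant_effect=False)))  # auto_apply
print(route(Decision("u2", "deny", 0.99, significant_effect=True)))      # human_review
```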


Digital transformation: 3 priorities for CIOs facing a tough climb

Leading a successful digital transformation is like leading a mountain climbing expedition: It takes courage, leadership, and perseverance. Consider these tips from a leader who's done both ... Imagine boiling the ocean in one day. That’s how digital transformation feels sometimes. The psychological impact becomes unbearable and overwhelming. By preparing and staying the course, however, digital transformation becomes an achievable feat with lasting outcomes. In the case of our climb, preparing meant wearing the right clothes, packing the right things, communicating with each other, trusting one another, fuelling ourselves with energy bars, breaking down the path into smaller chunks, and learning about the road ahead. As a leader, I ventured to turn our performance up that mountain from mediocre to exceptional. In digital transformation, this may mean upskilling the workforce and adopting new platforms. ... Climbing Mount Hood was precarious and mentally and physically difficult. I never wavered. I stuck to our goal because I knew the outcome would benefit everyone in my family. To soldier on, you must be that persistent.


How to Secure Your Cryptocurrency Wallet

Owners of Bitcoin, Ethereum, and other cryptocurrency typically trade on centralized platforms such as Robinhood, Coinbase, FTX, and others. They don't need to worry about creating and managing digital wallets since the platform handles those tasks. That's the convenience of a centralized platform. However, there are serious drawbacks to keeping your crypto assets on a platform. If the platform gets hacked, or your account credentials are stolen, or the government decides to seize your digital assets, you could lose all of your crypto investments. If you would rather not rely on these platforms to secure your digital assets and prefer not to be subject to their policies, it's better to move your digital assets off the platform to somewhere you have full control. Centralized platforms are the on-ramps to purchase digital assets with dollars. Once you make the purchase, you can take custody of your assets by transferring them to your wallet. Decentralized applications (dapps), on the other hand, require users to hold funds in their own wallet. Decentralized finance (DeFi) – such as lending, borrowing, insurance – requires using a digital wallet. DeFi is only slowly becoming available to users of centralized platforms.
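For illustration, here is a minimal sketch of verifying that transferred funds actually arrived at a self-custody Ethereum address, using web3.py's v6-style API. The RPC endpoint and address are placeholders; note that checking a balance needs only the public address, never a private key.

```python
# A minimal sketch of confirming receipt of funds at a self-custody
# Ethereum address with web3.py (v6-style API). The RPC URL and address
# are placeholders; balance checks require only the public address.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://YOUR-RPC-ENDPOINT"))  # e.g. a node provider
address = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")

balance_wei = w3.eth.get_balance(address)
print(f"balance: {Web3.from_wei(balance_wei, 'ether')} ETH")
```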


How to Work Better Together: Building DEI Awareness in Tech

Increasingly, we also gatekeep on existing experience. By that I mean the problem that those new to our industry face when they need to "get experience to get experience". This happens when entry-level roles already require some number of years of experience as a condition of hire. Without "year 0" opportunities, the only people in the available job pool will be people already behind the gate, and that number will decrease over time as people change industries, retire, or even want to go on holidays or sabbaticals. Perception of what success looks like is also a major barrier to success. A great example is the previous section, where I outlined groups of people who are not normally included in dress codes; not excluded actively, but rather invisibly, due to lack of representation or lack of awareness among those currently in the majority. A way to start self-testing for this is to see what comes to mind when I say "successful engineer", "manager", or "CEO". Specifically: what do the people in those roles look like and sound like, by default, in your mind’s eye?


Australia Says Uber 'Interfered' With Users' Privacy

The OAIC action comes almost five years after Uber's systems were infiltrated by attackers who stole user data. Uber's cover-up of the incident spurred outrage, inquiries and action by several regulators worldwide. Two attackers obtained login credentials from a private GitHub site that was used by some of Uber's engineers. They then used those login credentials to access an Amazon Web Services account that had an archive with rider and driver information. All told, there were 57 million accounts exposed. The data affected included names, email addresses and phone numbers for Uber customers as well as personal information of 7 million drivers and 600,000 driver's license numbers. Uber paid $100,000 in bitcoin to the two attackers and positioned the payment as a bug bounty. Uber did not reveal the breach until more than a year later in November 2017. Shortly after that disclosure, Uber fired Joe Sullivan, its CSO. Sullivan, who is now CSO for Cloudflare, was charged in the U.S. with obstruction of justice and misprision, which is the deliberate concealment of a felony or treasonable act.


CISO Interview Series: How Aiming for the Sky Can Help Keep Your Organization Secure

Visibility is key to understanding your landscape, to understanding what ‘your organizational landscape’ and world look like. The capability I would invest in is looking at your cyber risk profile, ensuring that you understand your risks. If you understand your risks, then you can help translate that across the business. Or it doesn’t need to be translated. It’s already done for you because you’ve got it in a risk profile that the business understands, because the business will essentially dictate it.
Once you understand your risk profile, that gives you actions you can work towards. Even if you’re using a risk framework, without a good risk assessment, you can be working on stuff that doesn’t really add value or isn’t a problem. Understanding your landscape is what gives the visibility. Focus on your basics and get your policies and processes in place so that there is structure that everyone can work from. As an example, we work to four areas: governance, risk, and compliance; security operations center; secure architecture; and secure infrastructure. They are the four pillars we align to. What that means is that your secure infrastructure is critical.


Health Care’s Digital Transformation: Three Trends To Watch For

A shift is happening within our health care system that is allowing more and more data to enter the health system. According to RBC Capital Markets, 30% of the world’s data volume is being generated by the health care industry, and by 2025, the compound annual growth rate of data for health care will reach 36%. Health care organizations must develop a plan to manage this data and integrate it with social determinants of health (SDoH) data, AI-fueled behavioral science, patient history and more to facilitate a more proactive approach to care. Value-based care — a buzzword for years now that emphasizes preventative care — may finally be within reach if health care leaders are able to harness this data and integrate it into clinical workflows. Like the health care system itself, these topics are interwoven and complex. Overcoming these challenges will require hard work and dedication from the entire health care industry, but I am confident we are making incredible strides. We’re seeing cloud adoption that would have been unimaginable just 18 months ago.
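As a quick back-of-the-envelope reading of that figure: a 36% compound annual growth rate multiplies data volume roughly 4.65 times over five years.

```python
# Back-of-the-envelope: what a 36% CAGR implies over five years.
cagr = 0.36
years = 5
multiplier = (1 + cagr) ** years  # (1.36)^5
print(f"{multiplier:.2f}x growth over {years} years")  # prints ~4.65x
```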


Re-focusing your tech strategy post-Covid

Too often businesses forget about the importance of measuring these KPIs long-term – in fact, research carried out last year by AppLearn found that just 12 per cent of organisations measure the success of their technology investments after one year, falling to five per cent after three years. When you consider the time and money ploughed into software roll-outs, these stats are shocking. But there’s also the fact that software evolves and the way users interact with it can change, especially with major updates – this makes assessing the performance and value of investments beyond the first few years of implementation just as important. In the age of the digital workplace, data is king and will give business leaders greater insights into the technologies used and the end-to-end employee experience. To maintain productivity in the long term, you must move beyond surface-level vanity metrics and gather intelligent data points – this could be time spent navigating tasks within applications, task error/completion rates, what pages users have visited or where they’ve looked for support.
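As one example of moving past surface-level metrics, here is a minimal sketch of computing a task completion rate per application from raw event logs with pandas. The column names and status values are assumptions about what an analytics export might look like.

```python
# A minimal sketch of a per-application task completion rate from event
# logs. Column names and status values are illustrative assumptions.
import pandas as pd

events = pd.DataFrame({
    "app":     ["CRM", "CRM", "CRM", "HR", "HR"],
    "task_id": [1, 2, 3, 4, 5],
    "status":  ["completed", "completed", "abandoned", "completed", "error"],
})

completion = (
    events.assign(done=events["status"].eq("completed"))
          .groupby("app")["done"]
          .mean()
          .rename("completion_rate")
)
print(completion)  # CRM: ~0.67, HR: 0.50
```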



Quote for the day:

"We are reluctant to let go of the belief that if I am to care for something I must control it." -- Peter Block