Daily Tech Digest - August 02, 2021

Power With Purpose: The Four Pillars Of Leadership

A leader is defined by a purpose that is bigger than themselves. When that purpose serves a greater good, it becomes the platform for great leadership. Gandhi summed up his philosophy of life with these words: “My life is my message.” That one statement speaks volumes about how he chose to live his life and share his message of non-violence, compassion, and truth with the world. When you have a purpose that goes beyond you, people will see it and identify with it. Being purpose-driven defines the nobility of one’s character. It inspires others. At its core, your leadership purpose springs from your identity, the essence of who you are. Purpose is the difference between a salesman and a leader, and in the end, the leader is the one who makes the impact on the world. ... The hallmark of a great leader is their care and concern for their people. Displaying compassion towards others is not about a photo-op, but an inherent characteristic that others can feel and hear when they are with you. It lives in the warmth and timbre of your voice. It shows in every action you take. Caring leaders take a genuine interest in others.


Using GPUs for Data Science and Data Analytics

It is now well established that the success of modern AI/ML systems has been critically dependent on their ability to process massive amounts of raw data in parallel using task-optimized hardware. The use of specialized hardware like Graphics Processing Units (GPUs) therefore played a significant role in this early success. Since then, a lot of emphasis has been placed on building highly optimized software tools and customized mathematical processing engines (both hardware and software) to leverage the power and architecture of GPUs and parallel computing. While the use of GPUs and distributed computing is widely discussed in academic and business circles for core AI/ML tasks (e.g. running a 100-layer deep neural network for image classification or a billion-parameter BERT language model), they receive less coverage when it comes to their utility for regular data science and data engineering tasks. These data-related tasks are the essential precursor to any ML workload in an AI pipeline, and they often constitute the majority of the time and intellectual effort spent by a data scientist or even an ML engineer.
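To make that point concrete, here is a minimal sketch of GPU-accelerated data preparation, assuming the RAPIDS cuDF library and a CUDA-capable GPU are available; the file and column names are purely illustrative.

    import cudf

    # Load a large CSV directly into GPU memory instead of host RAM.
    df = cudf.read_csv("transactions.csv")

    # Typical pre-ML wrangling runs on the GPU behind a pandas-like API.
    df = df.dropna(subset=["amount"])
    summary = df.groupby("customer_id").agg({"amount": "sum", "txn_id": "count"})

    # Move only the (much smaller) result back to the CPU when needed.
    print(summary.to_pandas().head())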


The Use of Deep Learning across the Marketing Funnel

Simply put, Deep Learning is an ML technique where very large neural networks are used to learn from large quantities of data and deliver highly accurate outcomes. The more the data, the better the Deep Learning model learns and the more accurate the outcome. Deep Learning is at the centre of exciting innovation possibilities like self-driving cars, image recognition, virtual assistants, instant audio translation, etc. The ability to manage both structured and unstructured data makes this a truly powerful technological advancement. ... Differentiation comes not only from product proposition and comms but also from how consumers experience the brand/service online. And here too, strides in Deep Learning are enabling marketers with more sophisticated ways to create differentiation. Website Experience: Based on the consumer profile and cohort, even the website experience can be customized to ensure that a customer gets a truly relevant experience, creating more affinity for the brand/service. A great example of this is Netflix, where no two users have the same website experience, based on their past viewing of content.


Navigating the 2021 threat landscape: Security operations, cybersecurity maturity

When it comes to cybersecurity teams and leadership, the report findings revealed no strong differences between the security function having a CISO or CIO at the helm and organizational views on increased or decreased cyberattacks, confidence levels related to detecting and responding to cyberthreats or perceptions on cybercrime reporting. However, it did find that security function ownership is related to differences regarding executive valuation of cyberrisk assessments (84 percent under CISOs versus 78 percent under CIOs), board of director prioritization of cybersecurity (61 percent under CISOs versus 47 percent under CIOs) and alignment of cybersecurity strategy with organizational objectives (77 percent under CISOs versus 68 percent under CIOs). The report also found that artificial intelligence (AI) is fully operational in a third of the security operations of respondents, representing a four percent increase from the year before. Seventy-seven percent of respondents also revealed they are confident in the ability of their cybersecurity teams to detect and respond to cyberthreats, a three-percentage-point increase from last year.


Don’t become an Enterprise/IT Architect…

The world is somewhere on the rapid-growth part of the S-curve of the information revolution. It is only when the S-curve moves into maturity that the speed of change slows down. It is at that point that the gap between upper management’s expectation of change capacity and the reality of the actual capacity for change is going to widen. And Enterprise/IT Architects–Strategists are, amongst other things, tasked with bridging that worsening gap. Which means that — for Enterprise/IT Architects–Strategists and the many more people who are actually active in shaping that digital landscape — as long as there is no true engagement from top management, the gap between them and their upper management looks like this red curve, which incidentally also represents how enjoyable/frustrating EA-like jobs are ... We’re in the period of rapid growth, the middle of the blue curve. That is also the period where change inside organisations (and society), much of it IT-related, gets more difficult every day, and thus noticeably slows down. Well-run and successful projects that take more than five years are no exception.


A Journey in Test Engineering Leadership: Applying Session-Based Test Management

Testing is a complex activity, just like software engineering or any craft that takes study, critical thinking and commitment. It is not possible to encode everything that happens during testing into a document or artifact such as a test case. The best we can do is report our testing in a way that tells a compelling and informative story about risk to the people who matter, i.e. those making the decisions about the product under test. ... SBTM is a kind of activity-based test management method, which we organize around test sessions. The method focuses on the activities testers perform during testing. There are many activities that testers perform outside of testing, such as attending meetings, helping developers troubleshoot problems, attending training, and so on. Those activities don’t belong in a test session. To have an accurate picture of only the testing performed and the duration of the testing effort, we package test activity into sessions.
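As a rough illustration of packaging test activity into sessions, the sketch below records a single chartered session in Python; the field names are hypothetical and not part of any SBTM standard.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestSession:
        charter: str                  # the mission for this session, e.g. a risk to explore
        tester: str
        duration_minutes: int         # uninterrupted, on-charter testing time only
        notes: List[str] = field(default_factory=list)
        bugs: List[str] = field(default_factory=list)

    session = TestSession(
        charter="Explore the checkout flow for rounding errors in totals",
        tester="alice",
        duration_minutes=90,
    )
    session.bugs.append("Total off by 0.01 when the cart mixes tax rates")

    # The testing effort reported to stakeholders is simply the sum of session durations.
    total_effort = sum(s.duration_minutes for s in [session])
    print(total_effort)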


Beware of blind spots in data centre monitoring

The answer is to combine the tools that tell you about the past and present states with a tool that will shine a light on how the environment will behave in the future. Doing this requires the use of Computational Fluid Dynamics (CFD). A virtual simulation of an entire data centre, CFD-based simulation enables operators to accurately calculate the environmental conditions of the facility. Virtual sensors, for instance, ensure the simulated data reflects the sensor data. Consequently, the results can be used to investigate conditions anywhere you want, in fine detail. CFD also extends beyond temperature maps and includes humidity, pressure and air speed. Airflow, for instance, can be traced to show how it is getting from one place to another, offering unparalleled insight into the cause of thermal challenges. Critically, CFD enables operators to simulate future resilience. A validated CFD model will offer information about any configuration of your data centre, simulating variations in current configurations, or in new ones you haven’t yet deployed.


SolarWinds CEO Talks Securing IT in the Wake of Sunburst

Specific to the pandemic, a lot of technologies -- endpoint security, cloud security, and zero trust -- have proliferated after the pandemic, and organizations have changed how they talk about how they are deploying these. Previously there may have been a cloud security team and an infrastructure security team, but very soon the line started getting blurred. There was very little need for network security because not many people were coming to work. It had to be changed in terms of organization, prioritization, and collaboration within the enterprise to leverage technology to support this kind of workforce. ... Every team has to be constantly vigilant about what might be happening in their environment and who could be attacking them. The other side of it is constant learning. You constantly demonstrate awareness and vigilance and constantly learn from it. The red team can be a very effective way to train an entire organization and sensitize them to, let’s say, a phishing attack. As common as phishing attacks are, a large majority of people, including in the technology sectors, do not know how to fully prevent them, despite the fact that there are a lot of phishing [detection] technology tools available.


Is your network AI as smart as you think?

The challenge comes when we stop looking at collections as independent elements and start looking at networks as collections of collections. A network isn’t an anthill; it’s the whole ecosystem the anthill sits inside, including trees and cows and many other things. Trees know how to be trees, cows understand the essence of cow-ness, but what understands the ecosystem? A farm is a farm, not some arbitrary combination of trees, cows, and anthills. The person who knows what a farm is supposed to be is the farmer, not the elements of the farm or the supplier of those elements, and in your network, dear network-operations type, that farmer is you. In the early days, the developers of AI explicitly acknowledged the separation between the knowledge engineer who built the AI framework and the subject-matter expert whose knowledge shaped the framework. In software, especially DevOps, the management tools aim to achieve a goal state, which in our farm analogy describes where cows, trees, and ants fit in. If the current state isn’t the goal state, they do stuff or move stuff around to converge on the goal. It’s a great concept, but for it to work we have to know what the goal is.
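The "converge on the goal state" idea can be sketched in a few lines of Python; the resources and actions below are toy examples, not any particular vendor's API.

    goal_state = {"web_servers": 3, "cache_nodes": 2}

    def observe_current_state():
        # A real tool would query the environment; hard-coded here for illustration.
        return {"web_servers": 2, "cache_nodes": 2}

    def reconcile(current, goal):
        actions = []
        for resource, wanted in goal.items():
            have = current.get(resource, 0)
            if have < wanted:
                actions.append(f"add {wanted - have} {resource}")
            elif have > wanted:
                actions.append(f"remove {have - wanted} {resource}")
        return actions

    # The tool can only "do stuff" sensibly because the goal was declared up front.
    print(reconcile(observe_current_state(), goal_state))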


Milvus 2.0: Redefining Vector Database

Milvus 2.0 adopts a microservice design, which features read and write separation, incremental and historical data separation, and separation of CPU-intensive, memory-intensive, and IO-intensive tasks. Microservices help optimize the allocation of resources for the ever-changing heterogeneous workload. In Milvus 2.0, the log broker serves as the system's backbone: all data insert and update operations must go through the log broker, and worker nodes execute CRUD operations by subscribing to and consuming logs. This design reduces system complexity by moving core functions such as data persistence and flashback down to the storage layer, and log pub-sub makes the system even more flexible and better positioned for future scaling. Milvus 2.0 implements a unified Lambda architecture, which integrates the processing of incremental and historical data. Compared with the Kappa architecture, Milvus 2.0 introduces log backfill, which stores log snapshots and indexes in object storage to improve failure recovery efficiency and query performance.
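From the client's point of view, that architecture stays hidden behind a small API. The sketch below uses the pymilvus 2.x SDK against an assumed local Milvus instance; the collection name, vector dimension and index parameters are illustrative, and exact signatures vary between SDK versions.

    from pymilvus import connections, Collection, CollectionSchema, FieldSchema, DataType

    connections.connect(host="localhost", port="19530")

    fields = [
        FieldSchema(name="id", dtype=DataType.INT64, is_primary=True),
        FieldSchema(name="embedding", dtype=DataType.FLOAT_VECTOR, dim=4),
    ]
    collection = Collection("docs", CollectionSchema(fields))

    # Inserts are written to the log broker first; worker nodes consume them asynchronously.
    collection.insert([[1, 2], [[0.1, 0.2, 0.3, 0.4], [0.2, 0.1, 0.4, 0.3]]])

    collection.create_index("embedding", {"index_type": "IVF_FLAT",
                                          "metric_type": "L2",
                                          "params": {"nlist": 64}})
    collection.load()
    hits = collection.search([[0.1, 0.2, 0.3, 0.4]], "embedding",
                             {"metric_type": "L2", "params": {"nprobe": 8}}, limit=2)
    print(hits)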



Quote for the day:

"Expression is saying what you wish to say, Impression is saying what others wish to listen." -- Krishna Sagar

Daily Tech Digest - August 01, 2021

For tech firms, the risk of not preparing for leadership changes is huge

Tech execs should be more rigorous about succession planning for one important reason: institutional memory. Tech firms generally are younger than other companies of a similar size, which partly explains why the median age of S&P 500 companies plunged to 33 years in 2018 from 85 years in 2000, according to McKinsey & Co. These enterprises clearly have accomplished a lot in their short lives, but in their haste, most have not captured their history, unlike their longer-lived peers in other sectors. Less than half of these tech firms, in fact, have formally recorded their leader’s story for posterity. That puts them at a disadvantage when, inevitably, they will be required to onboard newcomers to their C-suites. It’s best to record this history well before the intense swirl of a leadership transition begins. Crucially, it will help the incoming and future generations of leadership understand critical aspects of its track record, the lessons learned, culture and identity. It also explains why the organization has evolved as it has, what binds people together and what may trigger resistance based on previous experience. It’s as much about moving forward as looking back.


The importance of having accountability in AI ethics

In recent years, the EU has made conscious steps towards addressing some of these issues, laying the groundwork for proper regulation for the technology. Its most recent proposals revealed plans to classify different AI applications depending on their risks. Restrictions are set to be introduced on uses of the technology that are identified as high-risk, with potential fines for violations. Fines could be up to 6pc of global turnover or €30m, depending on which is higher. But policing AI systems can be a complicated arena. Joanna J Bryson is professor of ethics and technology at the Hertie School of Governance in Berlin, whose research focuses on the impact of technology on human cooperation as well as AI and ICT governance. She is also a speaker at EmTech Europe 2021, which is currently taking place in Belfast as well as online. Bryson holds degrees in psychology and artificial intelligence from the University of Chicago, the University of Edinburgh and MIT. It was during her time at MIT in the 90s that she really started to pick up on the ethics around AI.


Data Platform: Data Ingestion Engine for Data Lake

When we design and build a Data Platform, we always need to evaluate whether automation provides enough value to justify the team's effort and time. Time is the only resource that we cannot scale. We can grow the team, but the relationship between people and productivity is not direct. Sometimes, when a team is very focused on the automation paradigm, people want to automate everything, even actions that are only performed once or do not provide real value. ... Usually, this is not an easy decision, and it has to be evaluated by the whole team. In the end, it is an ROI decision. I don't like this concept very much because it often focuses on economic costs and forgets about people and teams. Before starting any design and development, we have to analyze whether there are tools available to cover our needs. As software engineers, we often want to develop our own software. But, from a team or product view, we should focus our efforts on the most valuable components and features. The goal of the Data Ingestion Engine is to make it easier to ingest data from a data source into our Data Platform by providing a standard, resilient and automated ingestion layer.
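As a hypothetical sketch of such a layer, each source below is described declaratively and the engine applies the same standard steps to all of them, so onboarding a new source means adding configuration rather than code. It assumes pandas and pyarrow are installed; the paths, formats and key columns are made up.

    import pandas as pd

    SOURCES = [
        {"name": "sales", "path": "raw/sales.csv", "format": "csv", "key": "order_id"},
        {"name": "customers", "path": "raw/customers.json", "format": "json", "key": "customer_id"},
    ]

    def ingest(source):
        reader = pd.read_csv if source["format"] == "csv" else pd.read_json
        df = reader(source["path"])
        df = df.drop_duplicates(subset=[source["key"]])      # one standard quality rule
        df.to_parquet(f"lake/bronze/{source['name']}.parquet", index=False)

    for source in SOURCES:
        ingest(source)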


Beyond OAuth? GNAP for Next Generation Authentication

With GNAP, a client can ask for multiple access tokens in one grant request (vs. multiple requests). For instance, you could request read privileges on one resource and read and write privileges on another. ... In GNAP, the requesting client declares what kinds of interactions it supports. The authorization server responds to the request with an interaction to be used to communicate with the resource owner or the resource client. These interactions are defined in the GNAP spec as first-class objects, which provides extension points for future communication. Interactions may include redirecting the browser, opening a deeplink URL in a mobile application or providing a user code to be used elsewhere. ... GNAP provides a grant identifier if the authorization server determines a grant can be continued, unlike OAuth2. In the sample below, the grant identifier, access_token.value, can be presented to the authorization server if the grant needs to be modified or continued after the initial request.
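For illustration, here is roughly what a grant request with two access tokens could look like when sent from Python. It is loosely modelled on examples in the GNAP drafts; the endpoint URL and resource types are invented, and field names may differ between draft versions.

    import requests

    grant_request = {
        "access_token": [
            {"label": "reports", "access": [{"type": "report-api", "actions": ["read"]}]},
            {"label": "files", "access": [{"type": "file-api", "actions": ["read", "write"]}]},
        ],
        "interact": {"start": ["redirect"]},      # interaction modes this client supports
        "client": "example-client-instance",      # pre-registered client reference, for brevity
    }

    response = requests.post("https://as.example/gnap", json=grant_request).json()

    # If the AS allows continuation, the response carries a token whose value can be
    # presented later to modify or continue this grant.
    continuation = response.get("continue", {}).get("access_token", {}).get("value")
    print(continuation)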


The Future Of Work Will Demand These 8 New Skills

Closely related to entrepreneurship is resilience. Humans are nothing if not adaptable, but embracing shifts and bouncing forward (rather than back) will require new competencies. The skill of resilience requires you to 1) stay aware of new information, 2) make sense of it, and 3) reinvent, innovate and solve problems. Finding fresh approaches and flexing based on your insights will be fundamental to success. ... Inherent to moving forward is the ability to believe in a positive future and focus on possibilities. When experts find fault with a lack of responsiveness, it’s often the result of a lack of imagination. The skills of being able to envision and foresee what might happen are critical to staying motivated, inspired and driven to create new beginnings. ... Success has always been about your network, but achievement in the future will depend even more on the strength of relationships. Your social capital and primary, secondary and tertiary relationships will be critical netting to offer you new learning, access to new opportunities and social support. The new skill will be the ability to build rapport, and to build it quickly and from a distance.


Will Artificial Intelligence Be the End of Web Design & Development

Whilst there has been plenty of hype in recent years around the impact AI will have on the website design and development community, the reality is that Artificial (Design) Intelligence technology is still very much in its infancy, and there’s a long way to go before we see web designers and developers being replaced by robots. AI-powered platforms and tools are actually making digital creatives and engineers more productive and more effective, allowing them to produce higher-quality digital experiences at a lower cost. The concept behind using Artificial Intelligence to create websites is quite simple: AI-powered code-completion tools are used to “make” a website on their own, and machine learning is then leveraged to optimize the user interface – entirely through adaptive intelligence, with minimal human intervention. ... The power of human creativity brings with it an innate curiosity; we are always looking to challenge the status quo and experiment with new forms and aesthetics. Creativity will always be a human endeavor.


Intelligent ERP: What It Takes To Thrive In A World Of Big Data

While challenging, this requirement led to an innovation that helped the payment services provider optimize its financial operations and better understand and expand its business. ZPS collaborated with the University of Seville in Spain to build a customized cash-flow model to uncover valuable liquidity and financial planning insights. Within this guarantee-monitoring model, ZPS uses Intelligent ERP to replicate data on contract accounts receivable in near-real time to a business warehousing solution and other reporting applications. An in-memory database then processes the data, calculates key figures such as customer cash-in and factoring cash-outs, and uses these figures to determine the amounts to be guaranteed each day. Furthermore, with a live connection to its business warehousing solution, ZPS uses a cloud-based analytics solution to let employees access calculated data and consume reports through intuitive dashboards and predictive stories. By amplifying the value of its Big Data with Intelligent ERP and augmented analytics, ZPS allows a larger circle of business users to gain insights into financial KPIs, such as gross customer cash-ins or days from order. 


Is McKinsey wrong about the financial benefits of diversity?

The authors emphasize that this isn’t definitive proof that there is no connection between racial and ethnic diversity and profits—more research is needed on that front. They also note several other important caveats, including that S&P 500 companies are not a random sample of public US firms, and that their method of identifying race and ethnicity among executives (using faces and names) is likely to overestimate the number of white executives. But they criticize McKinsey’s methodology, including its metric for measuring diversity among executives. They conclude that “caution is warranted in relying on McKinsey’s findings to support the view that US publicly traded firms can deliver improved financial performance if they increase the racial/ethnic diversity of their executives.” Among the additional research that Green and Hand call for is a way to better examine whether there is any causal relationship between a firm’s diversity and its financial performance. McKinsey, by its own admission, is only looking at correlation. 


Data scientists continue to be the sexiest hires around

With the value of data science clear in the potential of these industries, there is no reason to believe data science will be anything but a growing profession for years and years to come. AI adoption alone has skyrocketed in recent years. Now, half of all surveyed organizations say they have applied AI to fulfill at least one function, with many more intending to invest in data-driven solutions. As the accessibility and power of data become more common, so too does the need for data scientists. Now, data scientists must help businesses navigate a world of global data collection and applications. From securing business processes to meeting international data security standards to connecting new and vital patterns in business trends, data scientists are vital to the success of innumerable businesses across industries. One such measure they can be part of is setting global data security standards for various industries. Data science is still one of the sexiest jobs you can have because it increasingly means helping people and saving money. 


Stanford Researchers Put Deep Learning On A Data Diet

With the cost of deep learning model training on the rise, individual researchers and small organisations are settling for pre-trained models. Today, the likes of Google or Microsoft have budgets (read: millions of dollars) for training state-of-the-art language models. Meanwhile, efforts are underway to make the whole paradigm of training less daunting for everyone. Researchers are actively exploring ways to maximise training efficiency to make models run faster and use less memory. A common practice is to train small models until they converge and then apply a light compression technique. Techniques like parameter pruning have already become popular for reducing redundancies without sacrificing accuracy. In pruning, redundancies in the model parameters are explored, and the uncritical yet redundant ones are removed. Identifying important training data plays a role in online and active learning. But how much of the data is superfluous? ... For instance, the capabilities of computer vision systems have improved greatly due to (a) deeper models with high complexity, (b) increased computational power and (c) availability of large-scale labeled data.
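As a small, hedged example of the pruning idea, the snippet below uses PyTorch's built-in pruning utilities to zero out the lowest-magnitude weights of a toy model; the architecture and the 30% ratio are arbitrary choices for illustration.

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

    # Remove the 30% of weights with the smallest magnitude in each Linear layer.
    for module in model:
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")   # make the pruning permanent

    sparsity = (model[0].weight == 0).float().mean().item()
    print(f"First-layer sparsity after pruning: {sparsity:.0%}")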



Quote for the day:

"Successful leadership requires positive self-regard fused with optimism about a desired outcome." -- Warren Bennis

Daily Tech Digest - July 31, 2021

5 Cybersecurity Tactics To Protect The Cloud

Best practices to protect companies’ operations in the cloud are guided by three fundamental questions. First, who is managing the cloud? Many companies are moving towards a Managed Service Provider (MSP) model that includes the monitoring and management of security devices and systems called Managed Security Service Provider (MSSP). At a basic level, security services offered include managed firewall, intrusion detection, virtual private network, vulnerability scanning and anti-malware services, among others. Second, what is the responsibility shift in this model? There is always a shared responsibility between companies and their cloud infrastructure providers for managing the cloud. This applies to private, public, and hybrid cloud models. Typically, cloud providers are responsible for the infrastructure as a service (IaaS) and platform as a service (PaaS) layers while companies take charge of the application layer. Companies are ultimately responsible for deciding the user management concept for business applications, such as the user identity governance for human resources and finance applications.


Yugabyte CTO outlines a PostgreSQL path to distributed cloud

30 years ago, open source [databases were not] the norm. If you told people, “Hey, here’s an open source database,” they’re going to say, “Okay? What does that mean? What is it? What does it really mean? And why should I be excited?” And so on. I remember because at Facebook I was a part of the team that built an open source database called Cassandra, and we had no idea what would happen. We thought “Okay, here’s this thing that we’re putting out in the open source, and let’s see what happens.” And this is in 2007. Back in that day, it was important to use a restrictive license — like GPL — to encourage people to contribute and not just take stuff from the open source and never give back. So that’s the reason why a lot of projects ended up with GPL-like licenses. Now, MySQL did a really good job in adhering to these workloads that came in the web back then. They were tier two workloads initially. These were not super critical, but over time they became very critical, and the MySQL community aligned really well and that gave them their speed. But over time, as you know, open source has become a staple. And most infrastructure pieces are starting to become open source.


Introducing MVISION Cloud Firewall – Delivering Protection Across All Ports and Protocols

McAfee MVISION Cloud Firewall is a cutting-edge Firewall-as-a-Service solution that enforces centralized security policies for protecting the distributed workforce across all locations, for all ports and protocols. MVISION Cloud Firewall allows organizations to extend comprehensive firewall capabilities to remote sites and remote workers through a cloud-delivered service model, securing data and users across headquarters, branch offices, home networks and mobile networks, with real-time visibility and control over the entire network traffic. The core value proposition of MVISION Cloud Firewall is characterized by a next-generation intrusion detection and prevention system that utilizes advanced detection and emulation techniques to defend against stealthy threats and malware attacks with industry best efficacy. A sophisticated next-generation firewall application control system enables organizations to make informed decisions about allowing or blocking applications by correlating threat activities with application awareness, including Layer 7 visibility of more than 2000 applications and protocols.


How Data Governance Improves Customer Experience

Customer journey orchestration allows an organization to meaningfully modify and personalize a customer’s experience in real-time by pulling in data from many sources to make intelligent decisions about what options and offers to provide. While this sounds like a best-case scenario for customers and company alike, it requires data sources to be unified and integrated across channels and environments. This is where good data governance comes into play. Even though many automation tasks may fall in a specific department like marketing or customer service, the data needed to personalize and optimize any of those experiences is often coming from platforms and teams that span the entire organization. Good data governance helps to unify all of these sources, processes and systems and ensures customers receive accurate and impactful personalization within a wide range of experiences. As you can see, data governance can have a major influence over how the customer experience is delivered, measured and enhanced. It can help teams work better together and help customers get more personalized service.


The Life Cycle of a Breached Database

Our continued reliance on passwords for authentication has contributed to one toxic data spill or hack after another. One might even say passwords are the fossil fuels powering most IT modernization: They’re ubiquitous because they are cheap and easy to use, but that means they also come with significant trade-offs — such as polluting the Internet with weaponized data when they’re leaked or stolen en masse. When a website’s user database gets compromised, that information invariably turns up on hacker forums. There, denizens with computer rigs that are built primarily for mining virtual currencies can set to work using those systems to crack passwords. How successful this password cracking is depends a great deal on the length of one’s password and the type of password hashing algorithm the victim website uses to obfuscate user passwords. But a decent crypto-mining rig can quickly crack a majority of password hashes generated with MD5 (one of the weaker and more commonly-used password hashing algorithms).
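A quick sketch makes the contrast concrete: an unsalted MD5 hash can be guessed at enormous rates on a mining-class GPU rig, while a deliberately slow, salted algorithm such as bcrypt raises the cost of every guess. It assumes the third-party bcrypt package is installed; the password is an example only.

    import hashlib
    import bcrypt

    password = b"correct horse battery staple"

    # Fast, unsalted MD5: cheap for an attacker to brute-force at scale.
    md5_hash = hashlib.md5(password).hexdigest()

    # Salted, tunable-cost bcrypt: each guess is orders of magnitude more expensive.
    bcrypt_hash = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

    print(md5_hash)
    print(bcrypt_hash.decode())
    print(bcrypt.checkpw(password, bcrypt_hash))   # True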


Zero Trust Adoption Report: How does your organization compare?

From the wide adoption of cloud-based services to the proliferation of mobile devices. From the emergence of advanced new cyberthreats to the recent sudden shift to remote work. The last decade has been full of disruptions that have required organizations to adapt and accelerate their security transformation. And as we look forward to the next major disruption—the move to hybrid work—one thing is clear: the pace of change isn’t slowing down. In the face of this rapid change, Zero Trust has risen as a guiding cybersecurity strategy for organizations around the globe. A Zero Trust security model assumes breach and explicitly verifies the security status of identity, endpoint, network, and other resources based on all available signals and data. It relies on contextual real-time policy enforcement to achieve least privileged access and minimize risks. Automation and machine learning are used to enable rapid detection, prevention, and remediation of attacks using behavior analytics and large datasets.


Container Technology Complexity Drives Kubernetes as a Service

The reason why managed Kubernetes is now gaining traction is obvious, according to Brian Gracely, senior director of product strategy for Red Hat OpenShift. He pointed out that containers and Kubernetes are relatively new technologies, and that managed Kubernetes services are even newer. This means that until recently, companies that wanted or needed to deploy containers and use Kubernetes had no choice but to invest their own resources in developing in-house expertise. "Any time we go through these new technologies, it's early adopters that live through the shortfalls of it, or the lack of features or complexity of it, because they have an immediate problem they're trying to solve," he said. Like Galabov, Gracely thinks that part of the move towards Kubernetes as a Service is motivated by the fact that many enterprises are already leveraging managed services elsewhere in their infrastructure, so doing the same with their container deployments only makes sense. "If my compute is managed, my network is managed and my storage is managed, and we're going to use Kubernetes, my natural inclination is to say, 'Is there a managed version of Kubernetes?' as opposed to saying, 'I'll just run software on top of the cloud,'" he said. "That's sort of a normal trend."


NIST calls for help in developing framework managing risks of AI

"While it may be impossible to eliminate the risks inherent in AI, we are developing this guidance framework through a consensus-driven, collaborative process that we hope will encourage its wide adoption, thereby minimizing these risks," Tabassi said. NIST noted that the development and use of new AI-based technologies, products and services bring "technical and societal challenges and risks." "NIST is soliciting input to understand how organizations and individuals involved with developing and using AI systems might be able to address the full scope of AI risk and how a framework for managing these risks might be constructed," NIST said in a statement. NIST is specifically looking for information about the greatest challenges developers face in improving the management of AI-related risks. NIST is also interested in understanding how organizations currently define and manage characteristics of AI trustworthiness. The organization is similarly looking for input about the extent to which AI risks are incorporated into organizations' overarching risk management, particularly around cybersecurity, privacy and safety.


Studies show cybersecurity skills gap is widening as the cost of breaches rises

The worsening skills shortage comes as companies are adopting breach-prone remote work arrangements in light of the pandemic. In its report today, IBM found that the shift to remote work led to more expensive data breaches, with breaches costing over $1 million more on average when remote work was indicated as a factor in the event. By industry, data breaches in health care were most expensive at $9.23 million, followed by the financial sector ($5.72 million) and pharmaceuticals ($5.04 million). While lower in overall costs, retail, media, hospitality, and the public sector experienced a large increase in costs versus the prior year. “Compromised user credentials were most common root cause of data breaches,” IBM reported. “At the same time, customer personal data like names, emails, and passwords was the most common type of information leaked — a dangerous combination that could provide attackers with leverage for future breaches.” IBM says that it found that “modern” security approaches reduced expenses, with AI, security analytics, and encryption being the top three mitigating factors.


Exploring BERT Language Framework for NLP Tasks

An open-source machine learning framework, BERT, or Bidirectional Encoder Representations from Transformers, is used for training baseline NLP models that streamline downstream NLP tasks. This framework is used for language modeling tasks and is pre-trained on unlabelled data. BERT is particularly useful for neural network-based NLP models because it uses context to both the left and the right of a word to form relations and move to the next step. BERT is based on the Transformer, a path-breaking model developed and adopted in 2017 that identifies the important words in a sentence when predicting the next word. Toppling earlier NLP frameworks, which were limited to smaller data sets, the Transformer could establish larger contexts and handle issues related to the ambiguity of text. Following this, the BERT framework performs exceptionally well on deep learning-based NLP tasks. BERT enables the NLP model to understand the semantic meaning of a sentence – for example, “The market valuation of XX firm stands at XX%” – by reading bidirectionally (right to left and left to right), and it also helps in predicting the next sentence.
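A short sketch with the Hugging Face transformers library shows the bidirectional masked-word prediction described above; the model name and sentence are just examples, and the first run downloads the pre-trained weights.

    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # BERT uses context on both sides of [MASK] to rank candidate words.
    for prediction in fill_mask("The market valuation of the firm [MASK] at 20%.")[:3]:
        print(prediction["token_str"], round(prediction["score"], 3))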



Quote for the day:

“Whenever you find yourself on the side of the majority, it is time to pause and reflect.” -- Mark Twain

Daily Tech Digest - July 30, 2021

Five steps towards cloud migration for a remote workforce

Reducing costs is one of the key reasons many businesses move to the cloud, with a Microsoft survey identifying this as a top benefit of cloud migration. However, the cost of the migration project itself also needs to be taken into consideration. Some businesses will undertake this exercise in-house if they have an IT team that is big and experienced enough to take on the project or to keep costs low. But if your internal IT support team is small or you already take out managed IT services, we recommend utilising a third-party provider. A business with expertise in cloud consultancy will manage the entire process for you and ensure that your migration goes as smoothly as possible. Their extensive experience in deploying cloud solutions and cloud migrations means you’ll experience a smoother journey to cloud computing. While carrying out this project in-house may seem more cost-effective on the face of it, cloud experts will help you to reduce costs by considering every possibility and mitigating any potential risks. Moving workloads to the cloud is an essential step for businesses that are looking to reduce IT operating costs, increase security, and improve efficiency and productivity.


Cloud Security Basics CIOs and CTOs Should Know

Cloud environments have proven not to be inherently secure (as originally assumed). For the past several years, there have been active debates about whether cloud is more or less secure than a data center, particularly as companies move further into the cloud. Highly regulated companies tend to control their most sensitive data and assets from within their data centers and have moved less-critical data and workloads to the cloud. On the flip side, Amazon, Google, and Microsoft spend considerably more on security than the average enterprise, and for that reason, some believe cloud environments are more secure than on-premises data centers. "AWS, Microsoft, and Google are creators of infrastructure and application deployment platforms. They're not security companies," said Richard Bird, chief customer information officer at multi-cloud identity solution provider Ping Identity. "The Verizon Database Incident Report says about 30% of all breaches are facilitated by human error. That same 30% applies to AWS, Microsoft, and Google. [Cloud] cost reductions don't come with a corresponding decrease in risk."


How To Defend Yourself Against The Powerful New NSO Spyware Attacks

Unlike infection attempts which require that the target perform some action like clicking a link or opening an attachment, zero-click exploits are so called because they require no interaction from the target. All that is required is for the targeted person to have a particular vulnerable app or operating system installed. Amnesty International’s forensic report on the recently revealed Pegasus evidence states that some infections were transmitted through zero-click attacks leveraging the Apple Music and iMessage apps. This is not the first time NSO Group’s tools have been linked to zero-click attacks. A 2017 complaint against Panama’s former President Ricardo Martinelli states that journalists, political figures, union activists, and civic association leaders were targeted with Pegasus and rogue push notifications delivered to their devices, while in 2019 WhatsApp and Facebook filed a complaint claiming NSO Group developed malware capable of exploiting a zero-click vulnerability in WhatsApp. As zero-click vulnerabilities by definition do not require any user interaction, they are the hardest to defend against.


Distributed DevOps Teams: Enabling Non-Stop Delivery

An important element of most DevOps teams is cultural integration; learning about and from each other, establishing the psychological safety within the team to fail in front of your peers, the proverbial finishing of each other’s sentences… it’s simply harder to establish this level of cultural cohesiveness when you are working in distributed teams. Leaders are also challenged; how do they recognize when a team member needs help, needs to be prompted, or requires clearer direction without the body language cues or without any interaction at all, if they are in completely different time zones? As a leader, recognizing when to intervene, when to support, and when to engage is challenging when the team is delivering outside of view. Trust becomes crucial between all team members. This particular organization is currently considering "time zone rotation" so that team members can establish working relationships and trust outside of their own normal working time group.


Building A Secure Cloud Infrastructure For Strong Data Protection

Sometimes the terms “security” and “privacy” are used interchangeably, but it is vital to understand the nuances between the two when building a secure cloud infrastructure. Data privacy is associated with ensuring that personally identifiable information (PII) stored in the cloud is hidden. Privacy regulations, such as the EU’s GDPR and the California Consumer Privacy Act (CCPA), dictate what data is considered private and that the data remains pseudonymized at all times. Data security, on the other hand, pertains to specific protections that have to be built into the infrastructure to prevent data from being stolen. Building a secure cloud infrastructure is predicated upon understanding the right mix of privacy and security measures, which can vary based on an organization’s industry and the specific regulations to which it must adhere. Many organizations aren’t clear on how to protect data in the cloud. The natural assumption is that the cloud provider will handle security, but that is not the case. When migrating to the cloud, most providers lay out a shared responsibility model for protection, meaning the provider is responsible for specific security areas and the company is responsible for others.
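One common building block on the privacy side is pseudonymization via a keyed hash, so records stay linkable without exposing the underlying PII. The sketch below is a minimal illustration; the key handling is deliberately simplified, and a real deployment would pull the key from a secrets manager and rotate it.

    import hmac
    import hashlib

    SECRET_KEY = b"replace-with-a-key-from-your-secrets-manager"

    def pseudonymize(value: str) -> str:
        # Keyed hash: stable for the same input, but not reversible without the key.
        return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

    record = {"email": "jane.doe@example.com", "plan": "premium"}
    stored = {"email": pseudonymize(record["email"]), "plan": record["plan"]}
    print(stored)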


7 Best Soft Skills That Make a Great Software Developer

Everyone can talk, but not everyone can communicate. Being a software developer means understanding a whole new language: the language of code, with all the acronyms and technical terms that come with it. These terms may seem simple to you, but will all your colleagues understand them? Work on your communication skills by carefully considering the language you use and tailoring it to your audience. Could you explain agile software testing to a computing novice, for example? By honing your communication soft skills, you can reach out to more people. These first two soft skills go hand in hand: to be a great communicator, you also have to be a great listener. Remember that everyone you work with and speak to deserves to be listened to, and they may have information that will make your job easier. Put distractions to one side, and concentrate completely on the person who’s talking to you. Keep an eye out for non-verbal communication signs too, as they can often reveal as much as what a person is saying.


McAfee: Babuk ransomware decryptor causes encryption 'beyond repair'

"It seems that Babuk has adopted live beta testing on its victims when it comes to its Golang binary and decryptor development. We have seen several victims' machines encrypted beyond repair due to either a faulty binary or a faulty decryptor," Seret and Keijzer said. "Even if a victim gave in to the demands and was forced to pay the ransom, they still could not get their files back. We strongly hope that the bad coding also affects Babuk's relationship with its affiliates. The affiliates perform the actual compromise and are now faced with a victim who cannot get their data back even if they pay. This essentially changes the crime dynamic from extortion to destruction, which is much less profitable from a criminal's point of view." The typical Babuk attack features three distinct phases: initial access, network propagation, and action on objectives. Babuk also operated a ransomware-as-a-service model before shutting down in April. Northwave investigated a Babuk attack that was perpetrated through the CVE-2021-27065 vulnerability also being exploited by the HAFNIUM threat actor.


Cisco preps now for the hybrid workforce

The lasting impact of remote work is resulting in a reassessment of the IT infrastructure that shifts buyer requirements to demand work-anywhere capabilities, said Ranjit Atwal, senior research director at Gartner. “Through 2024, organizations will be forced to bring forward digital business transformation plans by at least five years,” Atwal said. “Those plans will have to adapt to a post-COVID-19 world that involves permanently higher adoption of remote work and digital touchpoints.” Digital products and services will play a big role in these digital transformation efforts, Atwal stated. “This longer strategic plan requires continued investment in strategic remote-first technology continuity implementations along with new technologies such as hyperautomation, AI and collaboration technologies to open up more flexibility of location choice in job roles,” Atwal stated. The hybrid workforce will need every technology from SD-WAN and SASE to a full-stack collaboration suite--in Cisco's case WebEx--and best-in-class security and Wi-Fi and failover options, Nightingale said.


Silver linings: 7 ways CIOs say IT has changed for good

A positive change was the unbridled collaboration and coming together – without traditional borders or silos – to solve the exceptional challenges the pandemic threw at us. COVID-19 triggered physical social distancing while at the same time it bolstered digital connectedness and accelerated a culture shift to a more flexible work model. There was a pervasive focus on the wellbeing of each individual, and an intentional effort to hear from each person, which further diversifies input and insights to solve problems. The Sappi team came together in this manner and continues to carry forward those positive elements of inclusive and optimistic collaboration to navigate each effort with confidence that we will have a thriving future ahead. ... From the start of the pandemic, we leveraged these competencies and our fortitude to successfully solve business challenges and meet our goals and objectives. The demand for digital experiences and customers’ expectations for seamless digital offerings continues to increase, and MassMutual’s digital and technology advancements and digital-first mindset allow us to offer more modern tools at lower costs and provide an overall better customer experience.


What should IT leaders look for in an SD-WAN solution?

Delivering high performance, affordable SD-WAN solutions is not something everyone can do. For that reason, when an IT leader complains of connectivity speeds, the easier option is for providers to simply recommend more bandwidth. And, with the cost of circuits falling, it’s hard to push back on this apparent resolution. However, for many businesses, traditional networks will no longer be fit for purpose. We’re not all in the same network anymore, so it’s not a case of routing all the traffic into one place, through a huge firewall, and back out. The SD-WAN alternative sounds complex, and it really is – we’re talking an intelligent, responsive, end-to-end encrypted network with AI at its heart, after all. However, from the IT leader’s perspective, it is deployed with zero touch provisioning, no hardware installations, and self-configuration for ultimate ease. IT teams are here to deliver IT services, after all. They don’t want to be held back by infrastructure constraints. It’s about time that tech enabled them to do their jobs.



Quote for the day:

"Nothing is so potent as the silent influence of a good example." -- James Kent

Daily Tech Digest - July 29, 2021

How enterprise architects need to evolve to survive in a digital world

Traditionally, enterprise architects needed to be able to translate business needs into IT requirements or figure out how to negotiate a better IT system deal. That’s still important, but now they also need to be able to talk to board members and executive teams about the business implications of technology decisions, particularly around M&A. If the CEO wants to be able to acquire and divest new companies every year, the enterprise architect needs to explain the system landscape that requires, and in a merger context, what systems to merge and how. If the company invests in a new enterprise resource planning (ERP) system, the enterprise architect should be able to articulate the implications and the effect on the P&L. This level of conversation cannot be based on boxes and diagrams on PowerPoint, which is often the default but a largely theoretical approach. Instead, enterprise architects have to be able to use practical “business” language to communicate and articulate the ROI of architecture decisions and how they contribute to business-outcome key performance indicators.


New Android Malware Uses VNC to Spy and Steal Passwords from Victims

"The actors chose to steer away from the common HTML overlay development we usually see in other Android banking Trojans: this approach usually requires a larger time and effort investment from the actors to create multiple overlays capable of tricking the user. Instead, they chose to simply record what is shown on the screen, effectively obtaining the same end result." ... What's more, the malware employs ngrok, a cross-platform utility used to expose local servers behind NATs and firewalls to the public internet over secure tunnels, to provide remote access to the VNC server running locally on the phone. Additionally, it also establishes connections with a command-and-control (C2) server to receive commands over Firebase Cloud Messaging (FCM), the results of which, including extracted data and screen captures, are then transmitted back to the server. ThreatFabric's investigation also connected Vultur with another well-known piece of malicious software named Brunhilda, a dropper that utilizes the Play Store to distribute different kinds of malware in what's called a "dropper-as-a-service" (DaaS) operation, citing overlaps in the source code and C2 infrastructure used to facilitate attacks.


DevOps still 'rarely done well at scale' concludes report after a decade of research

A cross-functional team is one that spans the whole application lifecycle from code to deployment, as opposed to a more specialist team that might only be concerned with database administration, for example. Are cross-functional teams a good thing? "It depends," Kersten said. "There are underlying strata of technology that are better off centralized, particularly if you've got regulatory burdens, but that doesn't mean you shouldn't have cross-functional teams … too far in either direction is definitely terrible. The biggest problem we see is if there isn't a culture of sharing practices amongst each other." One thing to avoid, said Kersten, is a DevOps team. "I think we've broken the term DevOps team inside organisations," he told us. "I think it has passed beyond useful … calling your folk DevOps engineers or cloud engineers, these sorts of imprecise titles are not particularly useful, and DevOps is particularly broken." What if an organization reads the report and realises that it is not good at public cloud and not effective at DevOps, what should it do? "First optimize for the team," said Kersten.


DeepMind Launches Evaluation Suite For Multi-Agent Reinforcement Learning

Melting Pot is a new evaluation technique that assesses generalisation to novel situations consisting of both known and unknown individuals. It can test a broad range of social interactions such as cooperation, deception, competition, trust, reciprocation, stubbornness, etc. Unlike multi-agent reinforcement learning (MARL), which lacks a broadly accepted benchmark test, single-agent reinforcement learning (SARL) has a diverse set of benchmarks suitable for different purposes. Further, MARL has a relatively less favourable evaluation landscape compared to other machine learning subfields. Melting Pot offers a set of 21 MARL multi-agent games or ‘substrates’ to train agents on and more than 85 unique test scenarios for evaluating these agents. A central equation – Substrate + Background Population = Scenario – captures the true essence of the Melting Pot technique. The term substrate refers to a partially observable general-sum Markov game; a Melting Pot substrate is a game of imperfect information, in which each player possesses information that is unknown to their co-players. It includes the layout of the map, how objects are located, and how they move.


What to Look for When Scaling Your Data Team

Today, data-driven innovation has become a strategic imperative for just about every company, in every industry. But as organizations expand their investment in analytics, AI/ML, business intelligence, and more, data teams are struggling to keep pace with the expectations of the business. Businesses will only continue to rely more heavily on their data teams. However, recent survey research suggests that 96% of data teams are already at or over their work capacity. To avoid leaving their teams in a lurch, many organizations will need to significantly scale their data team’s operations, both in terms of efficiencies and team size. In fact, 79% of data teams indicated that infrastructure is no longer the scaling problem — this puts the focus on people and team capacity. But what should managers look for when growing their teams? And what tools can provide relief for their already overburdened staff? The first step that managers of data teams must do is to evaluate their teams’ current skills in close alignment with the projected needs of the business. Doing so can provide managers with a deeper understanding of what skill sets to look for when interviewing candidates.


Eight Signs Your Agile Testing Isn’t That Agile

When you have a story in a sprint, and you find an issue with that story, what do you do? For many teams, the answer is still “file a defect.” In waterfall development, test teams would get access to a new build with new features all at once. They would then start a day-, week-, or even month-long testing cycle. Given the amount of defects that would be found and the time duration between discovery and fixing, it was critical to document every single one. This documentation is not necessary in Agile development. When you find an issue, collaborate with the developer and get the issue fixed, right then and there, in the same day or at least in the same sprint. If you need to persist information about the defect, put it in the original story. There is no need to introduce separate, additional documentation. There are only two reasons you should create a defect. One: an issue was found for previously completed work, or for something that is not tied to any particular story. This issue needs to be recorded as a defect and prioritized. (But, see next topic!) 


Mitre D3FEND explained: A new knowledge graph for cybersecurity defenders

D3FEND is the first comprehensive examination of this data, but assembling it wasn’t without its difficulties. Using the patent database as original source material for this project was both an inspiration and a frustration. Kaloroumakis got the idea when he had to review patent filings when he was CTO of Bluvector.io, a security company, before he came to Mitre. “There is an incredible variance in technical specifics across the patent collection,” he says. “With some patents, little is left to your imagination, but others are more generic and harder to figure out.” He was surprised at the thousands of cybersecurity patent filings he found. “Some vendors have more than a hundred filings,” he said and noted that he has not cataloged every single cybersecurity patent in the collection. Instead, he has used the collection as a means to an end, to create the taxonomies and knowledge graph for the project. He also wanted to emphasize that just because a technology or a particular security method is mentioned in a patent filing doesn’t mean that this method actually finds its way into the actual product.


Benefits of Loosely Coupled Deep Learning Serving

Another convincing aspect of choosing message-mediated DL serving is its easy adaptability. There is a learning curve for any web framework and library, even for micro-frameworks such as Flask, if one wants to exploit its full potential. On the other hand, one does not need to know the internals of messaging middleware; furthermore, all major cloud vendors provide their own managed messaging services that take maintenance out of the engineers' backlog. This also has many advantages in terms of observability. As messaging is separated from the main deep learning worker with an explicit interface, logs and metrics can be aggregated independently. On the cloud, this may not even be needed, as managed messaging platforms handle logging automatically, with additional services such as dashboards and alerts. The same queuing mechanism lends itself natively to auto-scalability as well. Stemming from high observability, what queuing brings is the freedom to choose how to auto-scale the workers. In the next section, an auto-scalable container deployment of DL models will be shown using KEDA.
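As a hedged sketch of what such a message-mediated worker can look like, the snippet below consumes inference requests from a queue and publishes results back. It assumes a local RabbitMQ broker and the pika client; the queue names and the stand-in model are invented, and queue depth is exactly the signal an autoscaler such as KEDA can act on.

    import json
    import pika

    class DummyModel:
        """Stand-in for however your real DL model is loaded and invoked."""
        def predict(self, inputs):
            return sum(inputs)

    model = DummyModel()

    def handle(channel, method, properties, body):
        request = json.loads(body)
        result = {"prediction": model.predict(request["inputs"])}
        # Reply on the queue named by the caller (assumed to exist already).
        channel.basic_publish(exchange="", routing_key=request["reply_to"],
                              body=json.dumps(result))
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="inference")
    channel.basic_consume(queue="inference", on_message_callback=handle)
    channel.start_consuming()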


Should You Trust Low Code/No Code for Mission-Critical Applications?

More enterprises now understand the value of low code and no code, though the differences between those product categories are worth considering. Low code is aimed at developers and power users. No code targets non-developers working in lines of business. The central idea is to get to market faster than is possible with traditional application development. ... In some cases, it makes a lot of sense to use low code, but not always. In Frank's experience, an individual enterprise's requirements tend to be less unique than the company believes and therefore it may be wiser to purchase off-the-shelf software that includes maintenance. For example, why build a CRM system when Salesforce offers a powerful one? In addition, Salesforce employs more developers than most enterprises. About six years ago, Bruce Buttles, digital channels director at health insurance company Humana, was of the opinion that low code/no code systems "weren't there yet," but he was ultimately proven wrong. "I looked at them and spent about three months building what would be our core product, four or five different ways using different platforms. I was the biggest skeptic," said Buttles.


Confidence redefined: The cybersecurity industry needs a reboot

As businesses continue to adjust to the virtual and flexible workplace, a common fear is loss of productivity and, ultimately, damage to the bottom line. While many enterprises were already on a "digital transformation" journey, this new dynamic has added the need for fresh thinking. As a result, many organizations are implementing new applications to ensure day-to-day activities remain seamless, but are unknowingly, or in some cases knowingly, sacrificing security in the process. This is an expansive area of risk for many businesses. Truth be told, the human (and even non-human) workforce will always come with a certain level of risk, but a distributed workforce now gives malicious actors more opportunities to do their dirty work; most organizations have created a larger "attack surface" as a result of the pandemic. To allow their businesses to thrive going forward, the key for leaders in both IT and business is to focus on enablement and security: providing access to important technology and tools while properly controlling that access to keep the business and its customers' critical assets protected.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - July 28, 2021

DevOps Is Dead, Long Live AppOps

The NoOps trend aims to remove all the friction between development and operations by, as the name suggests, simply removing operations. This may seem a drastic solution, but we do not have to take it literally. The right interpretation, the feasible one, is to remove the human component from the deployment and delivery phases as much as possible. That approach is naturally supported by the cloud, which helps things work by themselves. ... One of the most evident scenarios that explains the benefit of AppOps is any application based on Kubernetes. If you open any cluster, you will find a lot of pod/service/deployment settings that are mostly the same. In fact, every PHP application has the same configuration, except for parameters; the same goes for Java, .NET, or other applications. The trouble is that Kubernetes is agnostic to the content of the applications it hosts, so it needs to be told about every detail. We have to start from the beginning for every new application, even if the technology is the same. Why? I should only have to explain once how a PHP application is composed.
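
As a rough illustration of that repetition, the sketch below renders near-identical Kubernetes Deployment manifests for two PHP applications from a handful of parameters. The application names, image references, and defaults are hypothetical; the point is only to show how little actually varies between such apps.

```python
import json  # manifests are typically YAML in practice; JSON keeps this stdlib-only

def php_deployment(app_name: str, image: str, replicas: int = 2, port: int = 80) -> dict:
    """Render a typical Kubernetes Deployment for a PHP app from a few parameters."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app_name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": app_name}},
            "template": {
                "metadata": {"labels": {"app": app_name}},
                "spec": {
                    "containers": [{
                        "name": app_name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }

# Two "different" applications end up with near-identical manifests;
# only the name, image, and a couple of numbers change.
print(json.dumps(php_deployment("shop", "registry.example.com/shop:1.4"), indent=2))
print(json.dumps(php_deployment("blog", "registry.example.com/blog:2.0"), indent=2))
```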


Thrill-K: A Blueprint for The Next Generation of Machine Intelligence

Living organisms and computer systems alike must have instantaneous knowledge to allow for rapid response to external events. This knowledge represents a direct input-to-output function that reacts to events or sequences within a well-mastered domain. In addition, humans and advanced intelligent machines accrue and utilize broader knowledge with some additional processing. I refer to this second level as standby knowledge. Actions or outcomes based on standby knowledge require processing and internal resolution, which makes it slower than instantaneous knowledge; however, it is applicable to a wider range of situations. Humans and intelligent machines also need to interact with vast amounts of world knowledge so that they can retrieve the information required to solve new tasks or increase standby knowledge. Whatever the scope of knowledge within the human brain or the boundaries of an AI system, there is substantially more relevant information outside it that warrants retrieval. I refer to this third level as retrieved external knowledge.
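
As a purely illustrative analogy, not taken from the article, the three levels can be caricatured in a few lines of Python: a direct lookup for instantaneous knowledge, a slower local computation for standby knowledge, and a fallback fetch for retrieved external knowledge. All names and facts in the sketch are made up for demonstration.

```python
import math

# Instantaneous knowledge: reflex-like, direct input-to-output mapping.
INSTANTANEOUS = {"2+2": "4", "capital of France": "Paris"}

# Standby knowledge: held locally, but needs processing before it yields an answer.
STANDBY_FACTS = {"earth_radius_km": 6371}

def standby(query: str):
    if query == "circumference of Earth (km)":
        return str(round(2 * math.pi * STANDBY_FACTS["earth_radius_km"]))
    return None

def retrieve_external(query: str):
    # Stand-in for querying a much larger external source (search index, knowledge graph).
    return f"<looked up '{query}' from an external source>"

def answer(query: str):
    return INSTANTANEOUS.get(query) or standby(query) or retrieve_external(query)

print(answer("2+2"))                           # instantaneous
print(answer("circumference of Earth (km)"))   # standby
print(answer("population of Lisbon"))          # retrieved external
```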


GitHub’s Journey From Monolith to Microservices

Good architecture starts with modularity. The first step towards breaking up a monolith is to think about the separation of code and data based on feature functionality. This can be done within the monolith before physically separating the pieces in a microservices environment, and it is generally good architectural practice to make the code base more manageable. Start with the data and pay close attention to how it is being accessed. Make sure each service owns and controls access to its own data, and that data access only happens through clearly defined API contracts. I've seen a lot of cases where people start by pulling out the code logic but still rely on calls into a shared database inside the monolith. This often leads to a distributed monolith scenario that ends up being the worst of both worlds: having to manage the complexities of microservices without any of the benefits, such as being able to quickly and independently deploy a subset of features into production. Getting data separation right is a cornerstone of migrating from a monolithic architecture to microservices.
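
As a hedged sketch of that contrast, the snippet below puts a direct query against a shared database next to a client for a hypothetical user service that owns its data and exposes a narrow API contract. The table, endpoint, and class names are invented for illustration.

```python
from dataclasses import dataclass

# Anti-pattern: another service reaches into the monolith's shared database.
# Any schema change to the users table now ripples into every such caller,
# which is how a "distributed monolith" emerges.
def get_user_email_directly(db_conn, user_id: int) -> str:
    row = db_conn.execute("SELECT email FROM users WHERE id = ?", (user_id,)).fetchone()
    return row[0]

# Preferred: the user service owns its data and exposes a narrow contract.
@dataclass
class UserProfile:
    user_id: int
    email: str

class UserServiceClient:
    """Callers depend only on this contract, never on the service's tables."""

    def __init__(self, base_url: str, session):
        self.base_url = base_url
        self.session = session  # e.g. a requests.Session

    def get_profile(self, user_id: int) -> UserProfile:
        resp = self.session.get(f"{self.base_url}/users/{user_id}")
        resp.raise_for_status()
        data = resp.json()
        return UserProfile(user_id=data["id"], email=data["email"])
```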


Data Strategy vs. Data Architecture

By being abstracted from the problem solving and planning process, enterprise architects became unresponsive, he said, and “buried in the catacombs” of IT. Data Architecture needs to look at finding and putting the right mechanisms in place to support business outcomes, which could be everything from data systems and data warehouses to visualization tools. Data architects who see themselves as empowered to facilitate the practical implementation of the Business Strategy by offering whatever tools are needed will make decisions that create data value. “So now you see the data architect holding the keys to a lot of what’s happening in our organizations, because all roads lead through data.” Algmin thinks of data as energy, because stored data by itself can’t accomplish anything, and like energy, it comes with significant risks. “Data only has value when you put it to use, and if you put it to use inappropriately, you can create a huge mess,” such as a privacy breach. Like energy, it’s important to focus on how data is being used and have the right controls in place. 


Why CISA’s China Cyberattack Playbook Is Worthy of Your Attention

In the new advisory, CISA warns that the attackers will also compromise email and social media accounts to conduct social engineering attacks. A person is much more likely to click on an email and download software if it comes from a trusted source. If the attacker has access to an employee's mailbox and can read previous messages, they can tailor their phishing email to be particularly appealing, and even make it look like a response to a previous message. Unlike "private sector" criminals, state-sponsored actors are more willing to use convoluted paths to get to their final targets, said Patricia Muoio, former chief of the NSA's Trusted System Research Group, who is now general partner at SineWave Ventures. ... Private cybercriminals look for financial gain. They steal credit card information and health care data to sell on the black market, hijack machines to mine cryptocurrencies, and deploy ransomware. State-sponsored attackers are after different things. If they plan to use your company as an attack vector to go after another target, they'll want to compromise user accounts to get at their communications.


Breaking through data-architecture gridlock to scale AI

Organizations commonly view data-architecture transformations as "waterfall" projects. They map out every distinct phase—from building a data lake and data pipelines up to implementing data-consumption tools—and then tackle each only after completing the previous ones. In fact, in our latest global survey on data transformation, we found that nearly three-quarters of global banks are knee-deep in such an approach. However, organizations can realize results faster by taking a use-case approach. Here, leaders build and deploy a minimum viable product that delivers the specific data components required for each desired use case (Exhibit 2). They then make adjustments as needed based on user feedback. ... Legitimate business concerns over the impact any changes might have on traditional workloads can slow modernization efforts to a crawl. Companies often spend significant time comparing the risks, trade-offs, and business outputs of new and legacy technologies to prove out the new technology. However, we find that legacy solutions cannot match the business performance, cost savings, or reduced risks of modern technology, such as data lakes.


Data-Intensive Applications Need Modern Data Infrastructure

Modern applications are data-intensive because they make use of a breadth of data in more intricate ways than anything we have seen before. They combine data about you, about your environment, and about your usage, and use that to predict what you need to know. They can even take action on your behalf. This is made possible by the data made available to the app and by data infrastructure that can process that data fast enough to make use of it. Analytics that used to be done in separate applications (like Excel or Tableau) are getting embedded into the application itself. This means less work for the user to discover the key insight, or no work at all, as the insight is identified by the application and simply presented to the user. This makes it easier for the user to act on the data as they go about accomplishing their tasks. To deliver this kind of application, you might think you need an array of specialized data storage systems, each specializing in a different kind of data. But data infrastructure sprawl brings with it a host of problems.


The Future of Microservices? More Abstractions

A couple of other initiatives regarding Kubernetes are worth tracking. Jointly created by Microsoft and Alibaba Cloud, the Open Application Model (OAM) is a specification for describing applications that separates the application definition from the operational details of the cluster. It thereby enables application developers to focus on the key elements of their application rather than the operational details of where it deploys. Crossplane is the Kubernetes-specific implementation of the OAM. It can be used by organizations to build and operate an internal platform-as-a-service (PaaS) across a variety of infrastructures and cloud vendors, making it particularly useful in multicloud environments, such as those increasingly found in large enterprises through mergers and acquisitions. Whilst OAM seeks to separate the responsibility for deployment details from the writing of service code, service meshes aim to shift the responsibility for interservice communication away from individual developers to a dedicated infrastructure layer that manages the communication between services using a proxy.


Navigating data sovereignty through complexity

Data sovereignty is the concept that data is subject to the laws of the country in which it is processed. In a world of rapid adoption of SaaS, cloud and hosted services, the issues that data sovereignty can raise become obvious. In simpler times, data wasn't something businesses needed to be concerned about, and it could be shared and transferred freely with no consequence. Businesses that had a digital presence operated on a small scale and with low data demands, hosted on on-premise infrastructure. This meant that data could be monitored and kept secure, much different from the more distributed and hybrid systems that many businesses use today. With so much data sharing and so little regulation, it all came crashing down with the Cambridge Analytica scandal in 2016, prompting strict laws on privacy. ... When dealing with on-premise infrastructure, governance is clearer, as it must follow the rules of the country it's in. However, when it's in the cloud, a business can store its data in any number of locations, regardless of where the business itself is.


How security leaders can build emotionally intelligent cybersecurity teams

EQ is important, as it has been found by Goleman and Cary Cherniss to positively influence team performance and to cultivate positive social exchanges and social support among team members. However, rather than focusing on cultivating EQ, cybersecurity leaders such as CISOs and CIOs are often preoccupied by day-to-day operations (e.g., dealing with the latest breaches, the latest threats, board meetings, team meetings and so on). In doing so, they risk overlooking the importance of the development and strengthening of their own emotional intelligence (EQ) and that of the individuals within their teams. As well as EQ considerations, cybersecurity leaders must also be conscious of the team’s makeup in terms of gender, age and cultural attributes and values. This is very relevant to cybersecurity teams as they are often hugely diverse. Such values and attributes will likely introduce a diverse set of beliefs defined by how and where an individual grew up and the values of their parents. 



Quote for the day:

"The mediocre leader tells The good leader explains The superior leader demonstrates The great leader inspires." -- Buchholz and Roth