Daily Tech Digest - September 21, 2021

Cybersecurity Priorities in 2021: How Can CISOs Re-Analyze and Shift Focus?

The level of sophistication of attacks has increased manifold in the past couple of years. Attackers are leveraging advanced technology to infiltrate company networks and gain access to mission-critical assets. Given this scenario, organizations too need to leverage futuristic technology such as next-gen WAF, intelligent automation, behavior analytics, deep learning, security analytics, and so on to prevent even the most complex and sophisticated attacks. Automation also enables organizations to gain speed and scalability in the broader IT environment amid ramped-up attack activity. Security solutions like Indusface's AppTrana enable all this and more. ... Remote work is here to stay, and the concept of the network perimeter is blurring. For business continuity, organizations have to give employees access to mission-critical assets wherever they are. Employees are probably accessing these resources from personal, shared devices and unsecured networks. CISOs need to think strategically and implement borderless security based on a zero-trust architecture.


Benefits of cloud computing: The pros and cons

Cloud computing management raises many information systems management issues, including ethical (security, availability, confidentiality, and privacy) issues, legal and jurisdictional issues, data lock-in, lack of standardized service level agreements (SLAs), and customisation and technological bottlenecks, among others. Sharing a cloud provider has some associated risks. The most common cloud security issues include unauthorized access through improper access controls and the misuse of employee credentials. According to industry surveys, unauthorized access and insecure APIs are tied for the No. 1 spot as the biggest perceived security vulnerabilities in the cloud. Others include internet protocol vulnerabilities, data recovery vulnerability, metering and billing evasion, vendor security risks, compliance and legal risks, and availability risks. When you store files and data on someone else's server, you're trusting the provider with your crown jewels. Whether in a cloud or on a private server, data loss refers to the unwanted removal of sensitive information, either due to an information system error or theft by cybercriminals.


Progressing from a beginner to intermediate developer

In all your programming, you should aim to have a single source of truth for everything. This is the core idea behind DRY - Don't Repeat Yourself - programming. In order not to repeat yourself, you need to define everything only once. This plays out in different ways depending on the context. In CSS, you want to store all the values that appear time and time again in variables. Colors, fonts, max-widths, even spacing such as padding or margins are all properties that tend to be consistent across an entire project. You can often define variables for a stylesheet based on the brand guidelines, if you have access to them. Otherwise it's a good idea to go through the site designs and define your variables before starting. In JavaScript, every function you write should only appear once. If you need to reuse it in a different place, isolate it from the context you're working in by putting it into its own file. You'll often see a util folder in JavaScript file structures - generally this is where you'll find more generic functions used across the app. Variables can also be sources of truth.
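
The same single-source-of-truth idea carries over to any language. Below is a minimal Python sketch (the module names, values, and helper are purely illustrative) showing reusable constants and a shared utility function defined exactly once and imported wherever they are needed:

```python
# constants.py - one place for values reused across the project,
# much like storing colors and spacing in CSS variables.
PRIMARY_COLOR = "#0055aa"
MAX_WIDTH_PX = 1200
DEFAULT_PADDING_PX = 16

# utils.py - generic helpers live in one module so each function
# is defined only once and imported wherever it is needed.
def slugify(title: str) -> str:
    """Turn an arbitrary title into a URL-safe slug."""
    return "-".join(title.lower().split())

# Elsewhere in the app, import rather than re-implement:
#   from constants import MAX_WIDTH_PX
#   from utils import slugify
```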


SRE vs. DevOps: What are the Differences?

Site Reliability Engineering, or SRE, is a strategy that uses principles rooted in software engineering to make systems as reliable as possible. In this respect, SRE, which was made popular by Google starting in the mid-2000s, facilitates a shared mindset and shared tooling between software development and IT operations. Instead of writing software using one set of strategies and tools, then managing it using an entirely different set, SRE helps integrate the two practices by orienting both around concepts rooted in software engineering. Meanwhile, DevOps is a philosophy that, at its core, encourages developers and IT operations teams to work closely together. The driving idea behind DevOps is that when developers have visibility into the problems IT operations teams experience in production, and IT operations teams have visibility into what developers are building as they push new application releases down the development pipeline, the end result is greater efficiency and fewer problems for everyone.


Distributed transaction patterns for microservices compared

The technical requirements for two-phase commit are that you need a distributed transaction manager such as Narayana and a reliable storage layer for the transaction logs. You also need DTP XA-compatible data sources with associated XA drivers that are capable of participating in distributed transactions, such as RDBMSs, message brokers, and caches. If you are lucky enough to have the right data sources but run in a dynamic environment, such as Kubernetes, you also need an operator-like mechanism to ensure there is only a single instance of the distributed transaction manager. The transaction manager must be highly available and must always have access to the transaction log. For implementation, you could explore a Snowdrop Recovery Controller that uses the Kubernetes StatefulSet pattern for singleton purposes and persistent volumes to store transaction logs. In this category, I also include specifications such as Web Services Atomic Transaction (WS-AtomicTransaction) for SOAP web services.
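
The prepare/commit handshake that the transaction manager coordinates is easy to sketch in isolation. The following Python snippet is a minimal illustration of the two-phase commit protocol itself, not of the Narayana or XA APIs; the participant names are invented for the example:

```python
class Participant:
    """A resource (database, message broker, cache) enlisted in the transaction."""
    def __init__(self, name: str):
        self.name = name

    def prepare(self) -> bool:
        # Persist enough state to guarantee that a later commit will succeed.
        print(f"{self.name}: prepared")
        return True

    def commit(self):
        print(f"{self.name}: committed")

    def rollback(self):
        print(f"{self.name}: rolled back")

def two_phase_commit(participants) -> bool:
    # Phase 1: ask every participant to prepare (vote).
    if all(p.prepare() for p in participants):
        # Phase 2a: everyone voted yes, so commit everywhere.
        for p in participants:
            p.commit()
        return True
    # Phase 2b: any "no" vote aborts the whole transaction.
    for p in participants:
        p.rollback()
    return False

two_phase_commit([Participant("orders-db"), Participant("payments-broker")])
```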


5 observations about XDR

Today’s threat detection solutions use a combination of signatures, heuristics, and machine learning for anomaly detection. The problem is that they do this on a tactical basis by focusing on endpoints, networks, or cloud workloads alone. XDR solutions will include these tried-and-true detection methods, only in a more correlated way on layers of control points across hybrid IT. XDR will go further than existing solutions with new uses of artificial intelligence and machine learning (AI/ML). Think “nested algorithms” a la Russian dolls where there are layered algorithms to analyze aberrant behavior across endpoints, networks, clouds, and threat intelligence. Oh, and it kind of doesn’t matter which security telemetry sources XDR vendors use to build these nested algorithms, as long as they produce accurate high-fidelity alerts. This means that some vendors will anchor XDR to endpoint data, some to network data, some to logs, and so on. To be clear, this won’t be easy: Many vendors won’t have the engineering chops to pull this off, leading to some XDR solutions that produce a cacophony of false positive alerts.


Why quantum computing is a security threat and how to defend against it

First, public key cryptography was not designed for a hyper-connected world; it wasn't designed for an Internet of Things, and it's unsuitable for the nature of the world that we're building. The need to constantly refer to certification providers for authentication or verification is fundamentally unsuitable. And of course the mathematical primitives at its heart are definitely compromised by quantum attacks, so you have a system which is crumbling and is certainly dead in a few years' time. A lot of the attacks we've seen result from certificates being compromised, certificates expiring, and certificates being stolen and abused. But with the sort of computational power available from a quantum computer, blockchain is also at risk. If you make a signature bigger to guard against it being cracked, the block size becomes huge and the whole blockchain grinds to a halt. Think of the data centers as buckets: three times a day the satellites throw some random numbers into the buckets, and all data centers end up with an identical bucket full of identical sets of random information.


Government data management for the digital age

Despite the complexity and lengthy time horizon of a holistic effort to modernize the data landscape, governments can establish and sustain a focus on rapid, tangible impact. A failure to deliver results from the outset can undermine stakeholder support. In addition, implementing use cases early on helps governments identify gaps in their data landscapes (for example, useful information that is not stored in any register) and missing functionalities in the central data-exchange infrastructure. To deliver impact quickly, governments may deploy “data labs”—agile implementation units with cross-functional expertise that focus on specific use cases. Solutions are rapidly developed, tested, iterated and, once successful, rolled out at scale. The German government is pursuing this approach in its effort to modernize key registers and capture more value. ... Organizations such as Estonia’s Information System Authority or Singapore’s Government Data Office have played a critical role in transforming the data landscape of their respective countries. 


Abductive inference: The blind spot of artificial intelligence

AI researchers base their systems on two types of inference machines: deductive and inductive. Deductive inference uses prior knowledge to reason about the world. This is the basis of symbolic artificial intelligence, the main focus of researchers in the early decades of AI. Engineers create symbolic systems by endowing them with a predefined set of rules and facts, and the AI uses this knowledge to reason about the data it receives. Inductive inference, which has gained more traction among AI researchers and tech companies in the past decade, is the acquisition of knowledge through experience. Machine learning algorithms are inductive inference engines. An ML model trained on relevant examples will find patterns that map inputs to outputs. In recent years, AI researchers have used machine learning, big data, and advanced processors to train models on tasks that were beyond the capacity of symbolic systems. A third type of reasoning, abductive inference, was first introduced by American scientist Charles Sanders Peirce in the 19th century. 


Software Engineering is a Loser’s Game

Nothing is more frustrating as a code reviewer than reviewing code from someone who clearly didn’t do these checks themselves. It wastes the code reviewer’s time to catch simple mistakes like commented-out code, bad formatting, failing unit tests, or broken functionality in the code. All of these mistakes can easily be caught by the code author or by a CI pipeline. When merge requests are frequently full of errors, it turns the code review process into a gatekeeping process in which a handful of more senior engineers serve as the gatekeepers. This is an unfavorable scenario that creates bottlenecks and slows down the team’s velocity. It also detracts from the higher purpose of code reviews, which is knowledge sharing. We can use checklists and merge request templates to serve as reminders to ourselves of things to double-check. Have you reviewed your own code? Have you written unit tests? Have you updated any documentation as needed? For frontend code, have you validated your changes in each browser your company supports? 



Quote for the day:

"Effective leadership is not about making speeches or being liked; leadership is defined by results not attributes." -- Peter Drucker

Daily Tech Digest - September 20, 2021

Leadership and emotional intelligence

EI provides unique psychological resources to exert cognitive regulation over the effects of emotions, whether positive or negative, in order to maintain the leaders’ vision- or value-driven behavior. In simple words, EI is defined as cognitively controlled affective (emotional) processes to perform under stressful conditions. For example, EI is required when a person or a team loses a couple of matches: the Indian women’s hockey team lost its first three matches, then won the next three and entered the semi-final. Thus it is EI that helps manage the stress generated after consecutive successes or defeats. Otherwise, sadness, grief, fear, or anxiety could have taken over their mental capacities. It simply means that intelligence (IQ) works well when emotions are kept under control, because rationality is not an absolute construct; rather, it is bounded by personal and situational constraints. EI facilitates regulating emotions in both self and others. Meanwhile, emotions release a sustainable source of energy that helps achieve one’s long-term vision and mission of transformation for an organization or a country. 


What businesses need to know about data decay

There are several scenarios that can lead to data decay. The most common occurrence is when customer records – such as sales, marketing and CRM data – are not maintained. In systems that are constantly changing and evolving to meet business needs, linkages and completeness of data sets can quickly become broken and out of date if not properly maintained. Typically, there is no single source of data in any organization; instead, data repositories span multiple platforms, formats, and views. Another factor leading to data decay is the human element. Often, at some point in the journey, data is manually entered. The moment a mistyped or incorrect entry makes it into a system, data inconsistency, poor data hygiene and decay can occur. Enterprises copy data an average of 12 times per file, which means that a single mistake can have a compounded impact with exponential damages. Furthermore, all data has a lifecycle — meaning data is created, used and monitored and, at some point, it becomes no longer appropriate to store and must be securely disposed of.


MicroStream 5.0 is Now Open Source

MicroStream is not a complete replacement for a database management system (DBMS), since it lacks user management, connection management, session handling, etc., but in the vision of the developers of MicroStream, those features could be better implemented in dedicated server applications. MicroStream considers a DBMS an inefficient way of persisting data, since every database has its own data structure and, hence, data must be converted and mapped with an additional layer such as an object-relational mapper (ORM). These frameworks add complexity, increase latency and introduce performance loss. The MicroStream Data-Store technology removes the need for these conversions and everything can be stored directly in memory, making it super fast for queries and simplifying the architecture using just plain Java. According to their website, performance is increased by 10x for a simple query, with a peak of 1000x for a complex query with aggregation (sum), compared to JPA. Alternatively, they also offer connectors for databases like Postgres, MariaDB, SQLite and plain-file storage (even on the cloud) to persist data.


5 Techniques to work with Imbalanced Data in Machine Learning

For classification tasks, one may encounter situations where the target class label is unequally distributed across the various classes. Such a condition is termed an imbalanced target class. Modeling an imbalanced dataset is a major challenge faced by data scientists, as the imbalance in the data biases the model towards predicting the majority class. Hence, handling the imbalance in the dataset is essential prior to model training. There are various things to keep in mind while working with imbalanced data. ... Upsampling or oversampling refers to techniques that create artificial or duplicate data points of the minority class to balance the class labels. There are various oversampling techniques that can be used to create artificial data points. ... Undersampling alone is not recommended, as it removes majority-class data points, and oversampling techniques are often considered better than undersampling techniques. The idea is to combine undersampling and oversampling techniques to create a robust, balanced dataset fit for model training.
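
A short sketch of that combined over- and undersampling approach, using scikit-learn and the open-source imbalanced-learn package (assumed to be installed; the dataset and sampling ratios are illustrative):

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# A toy dataset where only ~5% of samples belong to the minority class.
X, y = make_classification(n_samples=10_000, weights=[0.95, 0.05], random_state=42)
print("original:", Counter(y))

# Oversample the minority class with SMOTE up to half the majority size...
X_over, y_over = SMOTE(sampling_strategy=0.5, random_state=42).fit_resample(X, y)

# ...then mildly undersample the majority class to reduce the remaining gap.
X_res, y_res = RandomUnderSampler(sampling_strategy=0.8, random_state=42).fit_resample(X_over, y_over)
print("resampled:", Counter(y_res))
```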


Why you need a personal laptop

Even if you leave your company with plenty of notice, moving a bunch of things off your work device in the last few days of your tenure could raise some eyebrows with IT — who, remember, can see everything you’re doing on that device. “Let’s say you’re going to work at a competitor,” Toohil says. “They’re gonna go through that huge audit trail, see, wow, you moved a bunch of data off this laptop in the week before you left. And that opens up a huge liability for you personally. At a minimum, you’re going to spend some time explaining what you were doing. In the worst case, you took some corporate information.” ... And if things go wrong, the list of embarrassing possibilities is endless: do you really want to be this woman, who received a text message about pooping on her computer while sharing her screen with executives? Or this employee, who accidentally posted fetish porn in a company-wide group chat? ...  If you’re mixing work and pleasure on one device, just one mistaken email attachment or one incorrect copy / paste could lead to scenarios that aren’t just embarrassing but could harm your relationships with co-workers and even jeopardize your job.


What is developer relations? Understanding the 'glue' that keeps software and coders together

Developer relations can take different forms and can mean different things to different organizations. It can involve: talking about a vendor's app at a conference; creating tutorials and walkthrough videos for YouTube; creating app resources for GitHub or responding to questions from developers on Stack Overflow. At its core, however, DevRel is about building rapport with the developer community and leveraging this to figure out how to build successful software applications. In this sense, developer relations is about closing the feedback loop and creating a bridge between the people who use the software and the wider organization, says Lorna Mitchell, head of developer relations at open-source software company, Aiven. "You need a way to speak to your developers," Mitchell tells ZDNet. "You have to be there – to be in the communities where the developers are. If someone has a question about your product on Stack Overflow, you want to be responding to that." Mitchell describes developer relations as a "glue" role, which is why it's common to see it report into different parts of an organization.


Curate Your Thought Leadership: 3 Pro Tips

Of course goals, habits and systems need to be measured to determine if you are making headway. Social channels don’t let us peek behind the curtain of the algorithm, making it a challenge to see if we are getting the most traction. Duritsa’s metric of choice is to see how many viewers click through to his profile. Your goal might be to measure engagement with your posts. Social media expert Marie Incontrera of Incontrera Consulting works with clients on thought leadership strategies that include social media, podcasting and speaking engagements, including TEDx. She suggests that a 1% engagement rate is a win for LinkedIn. For example, if your post gets 100 views, then one engagement – a reaction or a comment – is good. She also says that if you post every day you’ll goose the algorithm into “super poster” status, where your posts are amplified further than if you post less frequently. As you track your results, look at what topics get the most attention from your audience, determine what is resonating with them, then dial up what they like and phase out the types of posts that might fall flat. 


What role must CDOs play in today’s new working environment

As businesses look for ways to insulate themselves from future shock and deliver new and constantly evolving ways to deliver ROI, the workforce will need to embrace new data skills and technologies to provide insights faster and speed decision-making to inform the business. This all mandates the need for a digital cartographer — a CDO — whose role will be to help prioritise the dissemination of data, improve data access, and develop an always-on approach to upskilling and a data culture across the business. Through spearheading data democratisation across the organisation, a CDO can empower a dispersed, hybrid, data-literate workforce to deliver data-led insights by providing them with the right data tools to make that goal a reality. By providing access to data and analytics through easy-to-use, code-friendly self-service platforms, the CDO can create space for employees who want to upskill and become skilled knowledge workers themselves. Democratising access to these resources puts data science tools into the hands of the people with problems to solve – not exclusively those with years of experience or a specific university degree.


How to Craft Incident Response Procedures After a Cybersecurity Breach

As with all types of battles, cybersecurity is a game of preparation. Long before an incident occurs, trained security teams should know how to execute an incident response procedure in a timely and effective manner. To prepare your incident response plan, you must first review your existing protocols and examine critical business areas that could be targeted in an attack. Then, you must work to train your current teams to respond when a threat occurs. You must also conduct regular threat exercises to keep this training fresh in everyone's minds. ... Even with the best preparation, breaches still happen. For this reason, the next stage of an incident response procedure is to actively monitor possible threats. Cybersecurity professionals can use many intrusion prevention systems to find an active vulnerability or detect a breach. Some of the most common forms of these systems include signature, anomaly, and policy-based mechanisms. Once a threat is detected, these systems should also alert security and management teams without causing unnecessary panic.


How to retain the best talent in a competitive cybersecurity market

In cybersecurity, employees are often exposed to several aspects of technology and innovation. What I’ve learned from several conversations with employees is that ultimately, people want to work for organizations that are developing cutting-edge technology and making a real impact in the industry. They want to contribute to the solutions that are solving today’s most important problems – and in IT security, where cyber threats are looming and threatening organizations regularly, there’s an immense opportunity to play such a rewarding, impactful role. It’s up to the employers to share a vision with employees. Employees must realize how their contributions impact the company, customers, and the landscape. Often, employees may not realize that they’re contributing to solving a major, real-world issue, so it’s up to leadership – including HR leaders – to regularly communicate why the company exists, the difference it’s making, and how each employee plays a role in the responsibility. What attracts security professionals to a company is the power and impact of the technology and the experience they can receive.



Quote for the day:

"Leaders must know where they are going if they expect others to willingly join them on the journey." -- Kouzes & Posner

Daily Tech Digest - September 19, 2021

The main types of IoT sensors in the market today

IoT sensors are now available in a variety of sizes, as well, allowing for more portability and increased ease of use. This, in turn, has played a key role in establishing new use cases, which have entered various industries. “Sensors have developed from electro mechanical devices, to micro electro mechanical devices (MEMS), to nano electro mechanical devices (NEMS) and now Bio-MEMS, which can sense molecular level changes,” said Deepak Parameswaran, chief business officer for Industry NxT at Mindtree. ... Manufacturing is another industry with plenty of use cases for IoT sensor technology, as well as much potential going forward. With the Industrial Internet of Things (IIoT) aiding innovation in a sector that’s been otherwise slower to adopt digital processes, this trend is showing no signs of slowing down. Richard Simmons, group vice-president of technology – IoT at Logicalis Group, explained: “One of our customers had dedicated employees to go around with clipboards and climb up cranes and large, often dangerous, equipment just to write down how long it was used for. Then they go back to their office and record it.


IoT Will Spur Diversification In Indian Telecom

Specifically, narrowband IoT (NB-IoT) is unleashing powerful machine and sensor connectivity, delivering specific data, low latency, and increased power efficiency. And, it’s likely to drive millions of different types of connections and use cases. Connecting billions of devices presents challenges due to several concerns -- security, standardisation, authentication, and ubiquitous connectivity, the number one roadblock when deploying IoT. Nowhere is this dynamic more apparent than in India, a country largely connected using inadequate terrestrial telecom networks and very limited coverage across India’s vast hinterlands. Today, connectivity remains intermittent at best, often failing totally, while many still experience non-existent coverage in remote areas, where remote farms operate, at the borders, at rural power line stations, at last-mile distribution centres, far out to sea, and many other industrial operations. Even as IoT deployments grow to connect billions of machines, the increased volume of devices will take the deployments into remote parts, where they will experience little or no connectivity - and what connectivity is available will not be affordable.


Don’t Leave Your APIs Undefended Against Security Risks

Speaking of silos, disparate security approaches also create silos that can affect visibility. This can hinder threat detection and complicate the organization’s ability to see the full scope of a security incident. When creating a cloud security strategy, DevOps teams should consider adopting and implementing consistent policies that work in, across and outside of cloud environments. Use tools that allow for security configurations that can be centrally applied, tested and updated, and that support creating a consolidated view of the threats you face. This kind of consolidated view will also help security teams focus more on response and less on collecting information. A security platform that includes WAAP functionality combined with a common management, analysis and orchestration interface can help. This platform approach should include API security controls that can be deployed for every exposed API, which could include APIs deployed in multicloud and hybrid environments. The solutions you implement should also have the ability to block API threats using a WAF or other API gateway. 


Kamikaze satellites and shuttles adrift: Why cyberattacks are a major threat to humanity's ambitions in space

Although there are currently no known examples of cybercriminals hacking directly into satellites, vulnerabilities in the user and ground segments have been exploited in attempts to alter the flight path of satellites in orbit. “By design, every piece of infrastructure has entry points, each of which has the potential to create opportunities for attackers,” said Yamout. “On Earth, with all the advancements and new technologies, we have a relatively good level of security protection. But in space systems, the protections are much more basic.” “With evolving technology and science, it is likely we will visit space more than we used to. Cybersecurity has to be considered when designing space systems in all layers and must be integrated in all segments and phases of the space domain evolution.” No matter how well space infrastructure is protected, however, criminals will find a way to launch attacks. The question then becomes: who and why? At the moment, the incentives for cyber actors to launch attacks against space infrastructure are relatively few.


Computer vision can help spot cyber threats with startling accuracy

The traditional way to detect malware is to search files for known signatures of malicious payloads. Malware detectors maintain a database of virus definitions which include opcode sequences or code snippets, and they search new files for the presence of these signatures. Unfortunately, malware developers can easily circumvent such detection methods using different techniques such as obfuscating their code or using polymorphism techniques to mutate their code at runtime. Dynamic analysis tools try to detect malicious behavior during runtime, but they are slow and require the setup of a sandbox environment to test suspicious programs. In recent years, researchers have also tried a range of machine learning techniques to detect malware. These ML models have managed to make progress on some of the challenges of malware detection, including code obfuscation. But they present new challenges, including the need to learn too many features and a virtual environment to analyze the target samples. Binary visualization can redefine malware detection by turning it into a computer vision problem.
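
Binary visualisation itself is a small preprocessing step: map a file's raw bytes onto a pixel grid and hand the resulting image to a vision model. A minimal sketch using NumPy and Pillow (the file path, image width, and CNN input size are illustrative):

```python
import numpy as np
from PIL import Image

def binary_to_image(path: str, width: int = 256) -> Image.Image:
    """Render a file's raw bytes as a grayscale image for a vision model."""
    data = np.fromfile(path, dtype=np.uint8)
    # Pad so the byte stream fits a rectangular height x width grid.
    height = int(np.ceil(data.size / width))
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[: data.size] = data
    return Image.fromarray(padded.reshape(height, width), mode="L")

# Example usage (file name is a placeholder):
# img = binary_to_image("suspicious_sample.bin")
# img.resize((224, 224)).save("sample.png")  # fixed input size for a CNN classifier
```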


The New Wave of Web 3.0 Metaverse Innovations

The term “metaverse” was coined by science fiction writer Neal Stephenson in his book Snow Crash. He described a popular virtual world experienced in the first person by users equipped with augmented reality technology. This idea was taken a step further in Ready Player One by Ernest Cline. He defined it as a fully realized digital world that exists beyond the analog one in which we live. While this futuristic vision may seem far-fetched, the reality which is beginning to take shape is just as exciting. Hailed as the next iteration of the internet, the metaverse sits at the intersection of the web, augmented reality and the blockchain. Until recently, you could only experience the internet when you go to it through a browser or app. The metaverse, with its wide range of connectivity types, devices and technologies will allow us to experience the internet on a huge variety of devices — from commonplace screens and cell phones to VR (virtual reality) and AR (augmented reality). Ultimately, the metaverse will allow us to play games, shop, trade, chat, work or even attend concerts. 


Artificial Intelligence in Finance: Opportunities and Challenges

One of the crucial applications of machine learning in the financial industry is credit scoring. Many financial institutions, be it large banks or smaller fintech companies, are in the business of lending money. And to do so, they need to accurately assess the creditworthiness of an individual or another company. Traditionally, such decisions were made by analysts after conducting an interview with an individual and gathering the relevant data points. However, artificial intelligence allows for a faster and more accurate assessment of a potential borrower, using more complex methods in comparison to the scoring systems of the past. ... Given how inflation is affecting our savings and the fact that it is no longer profitable to keep the money in a savings account, more and more people are interested in passive investing. And this is exactly where robo-advisors come into play. They are wealth management services in which AI puts together portfolio recommendations based on the investors’ individual goals (both short- and long-term), risk preferences, and disposable income.


HowTo: Accelerate the Enterprise Journey to Passwordless

Adopting passwordless requires trust in authentication. The number one concern raised in conversations around passwordless is this: what happens when this new factor is compromised? The answer lies in the next set of security benefits from passwordless. Pair strong user authentication with device authentication. By configuring workflows with rules, correlation, and policies, at-risk authentications can be identified and blocked, such as people using suspicious or new devices. More mature approaches will include user behavior analytics. Consider a criminal who is cloning or spoofing a person’s biometrics. With device authentication, the adversary will also need to compromise the person’s phone and computer without being detected. With behavior analytics, the criminal will also need to open apps that the person normally uses during typical work hours — again, undetected. This increases the complexity required for an attack, increasing the organization’s likelihood of recognizing and responding before the attempt is successful. Increasing trust in authentication creates barriers for criminals. It reduces risk and enables us to investigate factors other than passwords.


Democratisation of AI is crucial to harmonising omnichannel customer experience

Today AI is merely a tool, but in the near future, AI will become a new corporate competency that is crucial to the delivery of a consistent CX through every customer touchpoint. This core competency is the ability to get real-time data from the market and execute real-time decisions. Adopting and using Business AI throughout the enterprise to automate business decisions will help companies develop this corporate competency. This is critically important to delivering a consistent CX because customer expectation is so ephemeral. Every intent signal, transaction data, customer interaction insight, real-time materials cost, market volatility, inflationary pressure, and even competitive moves can potentially change a customer’s expectation. Without AI it’s virtually impossible to keep up with the dynamics of customer expectation. While this new AI competency will be important for every business, it’s often cost restrictive to develop in-house. The teams, systems, and infrastructure required to test, manage, secure and maintain proprietary AI systems can oftentimes turn the deployment of AI into a full-blown R&D operation. 


Celebrating AI-infused talent management at the Eightfold conference

Achieving greater efficiency and scale is the most significant benefit HR teams say AI provides today. AI also enables companies to reduce turnover because it allows them to build employee career paths and present growth opportunities. When internal mobility is high and turnover is low, HR teams can focus their time and resources on scaling the organization. ...  AI can’t solve all the problems HR faces; however, it can provide contextual data and intelligence to help reframe a problem, so HR teams know what needs to be solved. Contextual intelligence is the goal, with AI supporting HR teams’ experience, insights, and intuition. ... Talent mobility, diversity, equity and inclusion, talent acquisition, talent management, and governance were the leading topics covered in the 33 sessions. Based on customer presentations, it’s clear Eightfold is concentrating on helping their customers accelerate and improve talent acquisition. Customers including Dexcom and Micron explained how they’re relying on Eightfold for each stage of talent acquisition, including sourcing, screening, interview scheduling, diversity hiring, candidate experience, candidate relationship management, and on-campus hiring.



Quote for the day:

"Confident and courageous leaders have no problems pointing out their own weaknesses and ignorance. " -- Thom S. Rainer

Daily Tech Digest - September 18, 2021

10 Steps to Simplify Your DevSecOps

Automation is key when balancing security integrations with speed and scale. DevOps adoption already focuses on automation, and the same holds true for DevSecOps. Automating security tools and processes ensures teams are following DevSecOps best practices. Automation ensures that tools and processes are used in a consistent, repeatable and reliable manner. It’s important to identify which security activities and processes can be completely automated and which require some manual intervention. For example, running a SAST tool in a pipeline can be automated entirely; however, threat modeling and penetration testing require manual efforts. The same is true for processes. A successful automation strategy also depends on the tools and technology used. One important automation consideration is whether a tool has enough interfaces to allow its integration with other subsystems. For example, to enable developers to do IDE scans, look for a SAST tool that has support for common IDE software. Similarly, to integrate a tool into a pipeline, review whether the tool offers APIs, webhooks or CLI interfaces that can be used to trigger scans and request reports.


Next-Generation Layer-One Blockchain Protocols Remove the Financial Barriers to DeFi & NFTs

The rapidly expanding world of DeFi is singlehandedly reshaping the global financial infrastructure as all manner of stocks, securities and transferable assets are slowly but surely being tokenized and stored in digital wallets. New protocols are arising daily that allow anyone with an internet connection or smartphone to access ecosystems that are equivalent to digital savings accounts but offer much more attractive yields than those found in the traditional banking sector. Unfortunately, with most of the top DeFi protocols currently operating on the Ethereum blockchain, the high cost of conducting transactions on the network has priced out ordinary individuals living in countries where even a $5 transaction fee is a significant amount of money capable of feeding a family for a week. This is where competing new blockchain platforms have the biggest opportunity for growth and adoption thanks to cross-chain bridges, a growing number of opportunities to earn a yield on new DeFi protocols and significantly smaller transaction cost.


The Dance Between Compute & Network In The Datacenter

In an ideal world, there is a balance between compute, network, and storage that allows for the CPUs to be fed with data such that they do not waste too much of their processing capacity spinning empty clocks. System architects try to get as close as they can to the ideals, which shift depending on the nature of the compute, the workload itself, and the interconnects across compute elements — which are increasingly hybrid in nature. We can learn some generalities from the market at large, of course, which show what people do as opposed to what they might do in a more ideal world than the one we all inhabit. We tried to do this in the wake of Ethernet switch and router stats and server stats for the second quarter being released by the box counters at IDC. We covered the server report last week, noting the rise of the single-socket server, and now we turn to the Ethernet market and drill down into the datacenter portion of it that we care about greatly and make some interesting correlations between compute and network.


ZippyDB: The Architecture of Facebook’s Strongly Consistent Key-Value Store

A ZippyDB deployment (named "tier") consists of compute and storage resources spread across several regions worldwide. Each deployment hosts multiple use cases in a multi-tenant fashion. ZippyDB splits the data belonging to a use case into shards. Depending on the configuration, it replicates each shard across multiple regions for fault tolerance, using either Paxos or async replication. A subset of replicas per shard is part of a quorum group, where data is synchronously replicated to provide high durability and availability in case of failures. The remaining replicas, if any, are configured as followers using asynchronous replication. Followers allow applications to have many in-region replicas to support low-latency reads with relaxed consistency while keeping the quorum size small for lower write latency. This flexibility in replica role configuration within a shard allows applications to balance durability, write performance, and read performance depending on their needs. ZippyDB provides configurable consistency and durability levels to applications, specified as options in read and write APIs. For writes, ZippyDB persists the data on a majority of replicas by default. 
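
ZippyDB's client API is internal to Facebook, but the per-request consistency and durability options described above can be pictured with a hypothetical sketch like the one below; the class, method, and parameter names are invented purely for illustration:

```python
class KVClient:
    """Hypothetical key-value client with per-request consistency options."""

    def write(self, key: bytes, value: bytes, persist_to_majority: bool = True):
        # Default behaviour described above: acknowledge only after a majority
        # of the quorum group has durably applied the write.
        ...

    def read(self, key: bytes, consistency: str = "eventual"):
        # "eventual" may be served by an in-region follower for low latency;
        # "strong" routes to the quorum for up-to-date data at higher latency.
        ...

# client.write(b"user:42", b"{...}")                # durable, higher write latency
# client.read(b"user:42", consistency="eventual")   # fast, possibly stale read
```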


CISA, FBI: State-Backed APTs May Be Exploiting Critical Zoho Bug

The FBI, CISA and the U.S. Coast Guard Cyber Command (CGCYBER) warned today that state-backed advanced persistent threat (APT) actors are likely among those who’ve been actively exploiting a newly identified bug in a Zoho single sign-on and password management tool since early last month. At issue is a critical authentication bypass vulnerability in Zoho ManageEngine ADSelfService Plus platform that can lead to remote code execution (RCE) and thus open the corporate doors to attackers who can run amok, with free rein across users’ Active Directory (AD) and cloud accounts. The Zoho ManageEngine ADSelfService Plus is a self-service password management and single sign-on (SSO) platform for AD and cloud apps, meaning that any cyberattacker able to take control of the platform would have multiple pivot points into both mission-critical apps (and their sensitive data) and other parts of the corporate network via AD. It is, in other words, a powerful, highly privileged application which can act as a convenient point-of-entry to areas deep inside an enterprise’s footprint, for both users and attackers alike.


Algorithmic Thinking for Data Science

Generalizing the definition and implementation of an algorithm is algorithmic thinking. What this means is, if we have a standard of approaching a problem, say a sorting problem, in situations where the problem statement changes, we would not have to completely modify the approach. There will always be a starting point to attack the new problem set. That’s what algorithmic thinking does: it gives a starting point. ... Why is the calculation of time and space complexities important, now more than ever? It has to do with something we discussed earlier – the amount of data getting processed today. To explain this better, let us walk through a few examples that will showcase the importance of large amounts of data in algorithm building. The algorithms that we casually create for problem-solving in a classroom are very different from what the industry requires when the amount of data being processed is more than a million times what we deal with, in test scenarios. And time complexities are always seen in action when the input size is significantly larger.
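
A concrete way to see why complexity matters once input sizes grow: the Python sketch below times an O(n) linear membership scan against an O(log n) binary search over the same sorted data (the input size and number of lookups are arbitrary):

```python
import bisect
import random
import time

n = 1_000_000
data = list(range(n))                     # already sorted
targets = [random.randrange(n) for _ in range(200)]

start = time.perf_counter()
for t in targets:
    _ = t in data                         # O(n) linear scan per lookup
linear = time.perf_counter() - start

start = time.perf_counter()
for t in targets:
    i = bisect.bisect_left(data, t)       # O(log n) binary search per lookup
    _ = i < n and data[i] == t
binary = time.perf_counter() - start

print(f"linear: {linear:.3f}s  binary: {binary:.5f}s")
# On classroom-sized inputs both feel instant; at millions of records
# the O(n) approach is orders of magnitude slower per query.
```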


Forget Microservices: A NIC-CPU Co-Design For The Nanoservices Era

Large applications hosted at the hyperscalers and cloud builders — search engines, recommendation engines, and online transaction processing applications are but three good examples — communicate using remote procedure calls, or RPCs. The RPCs in modern applications fan out across these massively distributed systems, and finishing a bit of work often means waiting for the last bit of data to be manipulated or retrieved. As we have explained many times before, the tail latency of massively distributed applications is often the determining factor in the overall latency in the application. And that is why the hyperscalers are always trying to get predictable, consistent latency across all communication across a network of systems rather than trying to drive the lowest possible average latency and letting tail latencies wander all over the place. The nanoPU research set out, says Ibanez, to answer this question: What would it take to absolutely minimize RPC median and tail latency as well as software processing overheads?


RESTful Applications in An Event-Driven Architecture

There are many use cases where REST is just the ideal way to build your applications/microservices. However, increasingly, there is more and more demand for applications to become real-time. If your application is customer-facing, then you know customers are demanding a more responsive, real-time service. You simply cannot afford to not process data in real-time anymore. Batch processing (in many modern cases) will simply not be sufficient. RESTful services, inherently, are polling-based. This means they constantly poll for data as opposed to being event-based where they are executed/triggered based on an event. RESTful services are akin to the kid on a road trip continuously asking you “are we there yet?”, “are we there yet?”, “are we there yet?”, and just when you thought the kid had gotten some sense and would stop bothering you, he asks again “are we there yet?”. Additionally, RESTful services communicate synchronously as opposed to asynchronously. What does that mean? A synchronous call is blocking, which means your application cannot do anything but wait for the response.
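
Seen side by side, the two styles look roughly like the following Python sketch; the endpoint URL is a placeholder, the requests package is assumed to be installed, and the in-process queue merely stands in for a real message broker subscription:

```python
import time
import queue
import requests  # assumes the requests package is installed

ORDERS_URL = "https://example.com/api/orders/123"  # illustrative endpoint

# Polling (synchronous, blocking): keep asking "are we there yet?"
def poll_until_shipped() -> str:
    while True:
        status = requests.get(ORDERS_URL, timeout=5).json()["status"]
        if status == "shipped":
            return status
        time.sleep(10)  # wasted round trips while nothing has changed

# Event-driven (asynchronous): react only when something actually happens.
events: queue.Queue = queue.Queue()  # stand-in for a broker subscription

def on_order_event(event: dict):
    if event.get("status") == "shipped":
        print("order shipped, notify customer")

def consume_forever():
    while True:
        on_order_event(events.get())  # blocks until an event arrives
```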


Application Security Tools Are Not up to the Job of API Security

With the advent of a microservice-based API-centric architecture, it is possible to test each of the individual APIs as they are developed rather than requiring a complete instance of an application — enabling a “shift left” approach that allows early testing of individual components. Because APIs are specified earliest in the SDLC and have a defined contract (via an OpenAPI / Swagger specification), they are ideally suited to a preemptive “shift left” security testing approach — the API specification and underlying implementation can be tested in a developer IDE as a standalone activity. Core to this approach is API-specific test tooling, as contextual awareness of the API contract is required. The existing SAST/DAST tools will be largely unsuitable in this application — in the discussion on DAST testing to detect BOLA we identified the inability of the DAST tool to understand the API behavior. By specifying the API behavior with a contract, the correct behavior can be enforced and verified, enabling a positive security model as opposed to a blacklist approach such as DAST.
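
One way such contract-driven, shift-left testing can look in practice is sketched below with the open-source Schemathesis library; the spec path and base URL are placeholders, and the exact loader call may differ between library versions:

```python
# Property-based tests generated from the OpenAPI contract itself,
# runnable from a developer machine or a CI job ("shift left").
import schemathesis

schema = schemathesis.from_path("openapi.yaml", base_url="http://localhost:8080")

@schema.parametrize()
def test_api_conforms_to_contract(case):
    # Sends a generated request and checks the response against the spec
    # (status codes, content types, response schemas).
    case.call_and_validate()
```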


Microservice Architecture – Introduction, Challenges & Best Practices

In a microservice architecture, we break down an application into smaller services. Each of these services fulfills a specific purpose or meets a specific business need, for example customer payment management or sending emails and notifications. In this article, we will discuss the microservice architecture in detail, the benefits of using it, and how to start with it. In simple words, it’s a method of software development where we break down an application into small, independent, and loosely coupled services. Each service has its own codebase and is developed, deployed, and maintained by a small team of developers. These services are not dependent on each other, so if a team needs to update an existing service, it can be done without rebuilding and redeploying the entire application. These services communicate with each other using well-defined APIs; their internal implementations are not exposed to each other.
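
As a concrete (if tiny) illustration of one such independently deployable service, here is a minimal sketch using Flask; the service name, endpoints, and in-memory store are all illustrative:

```python
# notification_service.py - a small, single-purpose service that other
# services talk to only through its HTTP API, never through its internals.
from flask import Flask, jsonify, request

app = Flask(__name__)
sent = []  # in-memory stand-in for this service's own datastore

@app.post("/notifications")
def send_notification():
    payload = request.get_json(force=True)
    sent.append(payload)
    return jsonify({"status": "queued", "id": len(sent)}), 202

@app.get("/health")
def health():
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=5001)
```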



Quote for the day:

"Leadership cannot just go along to get along. Leadership must meet the moral challenge of the day." -- Jesse Jackson

Daily Tech Digest - September 17, 2021

How CISOs and CIOs should share cybersecurity ownership

While the CISO is responsible for various elements of cybersecurity day-to-day and forward planning, in most organizations, the buck often stops with the CIO, who reports to the CEO and the board of directors, Finch says. “As a result, the CIO cannot hand responsibility to the CISO entirely. Instead, they need to retain awareness of security strategy and ensure that it isn’t putting the organization’s overall strategy in danger—or vice versa.” Brad Pollard, CIO at Tenable, says today's CIOs have a range of security accountabilities founded in availability, performance, budget, and the timely delivery of projects. “CIOs enable and support every business unit within an organization. In doing so, they inherit the information security requirements for each business unit.” For example, the CISO may well be charged with defining security parameters such as service level agreements for vulnerability remediation or access controls, but it falls to the CIO to deliver on these requirements for all business units, spanning all the company’s technologies, Pollard says. 


A Guide to DataOps: The New Age of Data Management

In a data-driven competitive landscape, ignoring the benefits of data, or even failing to extract its fullest potential, can only mean a disastrous end for organizations. To be sure, many of these organizations are collecting plenty of data. They just don’t want to use it, don’t know how, or don’t have the processes in place to do so. Part of the problem is legacy data pipelines. As data moves from source to target in the data pipeline, each stage has its own idea of what that data means and how it can be put to use. This disconnected view of data renders the data pipelines brittle and resistant to change, in turn making the organizations slow to react in the face of change. ... DataOps, short for data operationalization, is a collaborative data management approach that emphasizes communication, integration, and automation of data pipelines within organizations. Unlike data storage management, DataOps is not primarily concerned about ‘storing’ the data. It’s more concerned about ‘delivery’, i.e., making the data readily available, accessible, and usable for all the stakeholders. 


How To Reduce Context Switching as a Developer

Often, developers struggle to balance timely communication and context switching. As we already know, context switching has a negative impact on your productivity because it prevents you from reaching a deep state of work. On the other hand, when colleagues ask a question, you want to help them promptly. For example, a developer asks for your assistance and might be blocked if you don’t help him. But should you sacrifice your flow state to help your colleague? Well, the answer is somewhat divided. Try to find a balance between responding on time and prioritising your work. Asynchronous communication has become a popular approach to tackle this problem. Instead of calling a meeting for each problem, communicate with the involved people and resolve it via text-based communication such as Slack. Moreover, it would help if you blocked time in your calendar to reach a flow state and leave time slots open for meetings or handling questions from colleagues. For instance, you can block two slots of three hours of deep work and leave two slots of one hour for asynchronous communication.


Stop Using CSVs for Storage — Pickle is an 80 Times Faster Alternative

Storing data in the cloud can cost you a pretty penny. Naturally, you’ll want to stay away from the most widely known data storage format — CSV — and pick something a little lighter. That is, if you don’t care about viewing and editing data files on the fly. ... In Python, you can use the pickle module to serialize objects and save them to a file. You can then deserialize the serialized file to load them back when needed. Pickle has one major advantage over other formats — you can use it to store any Python object. That’s correct, you’re not limited to data. One of the most widely used functionalities is saving machine learning models after the training is complete. That way, you don’t have to retrain the model every time you run the script. I’ve also used Pickle numerous times to store Numpy arrays. It’s a no-brainer solution for setting checkpoints of some sort in your code. Sounds like a perfect storage format? Well, hold your horses.
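
The basic round trip is only a few lines; the file name and the array stored below are illustrative:

```python
import pickle
import numpy as np

checkpoint = {
    "weights": np.random.rand(1000, 100),  # any Python object works,
    "epoch": 42,                           # not just tabular data
}

# Serialize to disk...
with open("checkpoint.pkl", "wb") as f:
    pickle.dump(checkpoint, f, protocol=pickle.HIGHEST_PROTOCOL)

# ...and load it back later without retraining or recomputing.
with open("checkpoint.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored["epoch"], restored["weights"].shape)
```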


Data Management Strategy Is More Strategic than You Think

CXOs are in some ways the most visible representative inside large enterprises of what is, after all, a deeply felt human need to make sense of the world. We try to accomplish this in all parts of our lives including in our professional careers. It’s far more satisfying emotionally to work in an organization that uses data effectively to chart the way forward. But there are some pressing, contemporary drivers of urgency, too, not just an inherent human need. The pandemic radically accelerated awareness of data’s importance for both social and commercial resilience, especially in the face of repeated supply chain shocks and disruptions. But there was another factor too: most of the enterprise world has taken to working from home, operating complex orgs from the relative safety of social isolation, despite the additional challenges such isolation creates. The future of work has become problematized across the enterprise world and that raises questions and urgency around the future of information strategies to support the future of work.


UN Calls For Moratorium On Artificial Intelligence Tech That Threatens Human Rights

The report, which was called for by the UN Human Rights Council, looked at how countries and businesses have often hastily implemented AI technologies without properly evaluating how they work and what impact they will have. The report found that AI systems are used to determine who has access to public services and job recruitment, and impact what information people see and can share online, Bachelet said. Faulty AI tools have led to people being unfairly denied social security benefits, while innocent people have been arrested due to flawed facial recognition. "The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real," Bachelet said. The report highlighted how AI systems rely on large data sets, with information about people collected, shared, merged and analysed in often opaque ways. The data sets themselves can be faulty, discriminatory or out of date, and thus contribute to rights violations, it warned. For instance, they can erroneously flag an individual as a likely terrorist. The report raised particular concern about the increasing use of AI by law enforcement, including as forecasting tools.


How Do Authentication and Authorization Differ?

As a user, you can usually see authentication happening (although it might be persistent, like staying logged into a website even if you close the browser tab) and you can often do things like changing your password or choosing which second factor you want to use. Users can’t change their authorization options and won’t see authorization happening. But you might see another authentication request if you try to do something that’s considered important enough that your identity has to be verified again before you are authorized to do it. Some banks will let you log in to your account and make payments you’ve done previously with your username and password, but ask you to use 2FA to set up a new payee. Conversely, authentication systems that use conditional access policies can recognize that you’re using the same device, IP address, location and network connection to access the same file share you access from that device, location and network every day to improve your productivity and not make you go through an authentication challenge.
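
The distinction shows up cleanly in code. Below is a minimal Python sketch; the user store, plaintext password, and permission names are purely illustrative (no real system should store passwords this way):

```python
USERS = {"alice": "s3cret"}                   # who you are (plaintext only for illustration)
PERMISSIONS = {"alice": {"read_report"}}      # what you may do

def authenticate(username: str, password: str) -> bool:
    """Authentication: verify identity (often visible to the user)."""
    return USERS.get(username) == password

def authorize(username: str, action: str) -> bool:
    """Authorization: check a policy the user never sees or edits."""
    return action in PERMISSIONS.get(username, set())

if authenticate("alice", "s3cret"):
    if authorize("alice", "add_payee"):
        print("proceed")
    else:
        # Sensitive action: re-challenge (step-up / 2FA) before allowing it.
        print("re-authenticate with a second factor")
```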


Data Loss Protection Could Be Industry’s Next Big Trend

Data is an organisation’s most precious asset, and understanding the various data states can help determine the best security measures to deploy. Technology like DLP gives both IT and security personnel an overall perspective on the location, distribution, and use of information within an organisation. It helps guard against data theft and the consequences that follow it, including fines and lost income. If you’re worried about your upcoming audit and want to keep your data compliant with complex regulations, DLP is a great option for you. For companies wanting to protect their sensitive data from security breaches caused by increased worker mobility and the development of novel channels, the technology is a godsend. For DLP, success with cloud and virtual models has opened up new possibilities. Using business principles, these software tools identify and protect confidential and sensitive data, preventing unaccredited end-users from disclosing information that could endanger the firm.


Cloud Native Driving Change in Enterprise and Analytics

There is a democratization underway of data embedded into workflows and Slack, he said, but being able to expose data from applications or natively integrated in applications is the province of developers. Tools exist, Stanek said, for developers to make such data analytics more accessible and understandable by users. “We want to help people make decisions,” he said. “We also want to get them data at the right time, with the right context and volume.” Stanek said he sees more developers owning business applications, insights, and intelligence up to the point where end users can make decisions. “This industry is heading away from an isolated industry where business people are copying data into visualization tools and data preparation tools and analytics tools,” he said. “We are moving into a world where we will be providing all of this functionality as a headless functionality.” The rise of headless compute services, which do not have local keyboards, monitors, or other means of input and are controlled over a network, may lead to different composition tools that allow business users to build their own applications with low-code/no-code resources, Stanek said.


Is the Net Promoter Score ripe for replacement?

How can businesses measure the success of their marketing efforts? How does their current and future performance benchmark against competitors? How can they work out, for example, the levels of satisfaction and loyalty felt by their customers? The rise of social media during the last decade has simultaneously made these questions easier and in many ways more difficult to answer. On the one hand, the internet is bristling with all the necessary data required to determine how a given business is performing, as customers willingly – even eagerly – share thoughts and opinions which provide insights into such vital issues as customer satisfaction. On the other hand, the sheer volume of the data available can make it challenging to separate the essential from the non-essential. ... Clearly, then, the stage is set for new ways to measure performance: methods which are up-to-the-minute, capable of leveraging AI and machine learning technology to sift through swathes of data, and able to articulate actionable KPIs in a simple and accessible format.



Quote for the day:

"You think you can win on talent alone? Gentlemen, you don't have enough talent to win on talent alone." -- Herb Brooks, Miracle

Daily Tech Digest - September 16, 2021

Zero Trust Requires Cloud Data Security with Integrated Continuous Endpoint Risk Assessment

Most of us are tired of talking about the impact of the pandemic, but it was a watershed event in remote working. Most organizations had to rapidly extend their existing enterprise apps to all their employees, remotely. And since many had already embraced the cloud and had a remote access strategy in place, typically a VPN, they simply extended what they had to all users. CEOs and COOs wanted this to happen quickly and securely, and Zero Trust was the buzzword that most understood as the right way to make this happen. So vendors all started to explain how their widget enabled Zero Trust or at least a part of it. But remember, the idea of Zero Trust was conceived way back in 2014. A lot has changed over the last seven years. Apps and data that have moved to the cloud do not adhere to corporate domain-oriented or file-based access controls. Data is structured differently or unstructured. Communication and collaboration tools have evolved. And the endpoints people use are no longer limited to corporate-issued and managed domain-joined Windows laptops.


What We Can Learn from the Top Cloud Security Breaches

Although spending on cybersecurity grew 10% during 2020, this increase fell far short of accelerated investments in business continuity, workforce productivity and collaboration platforms. Meanwhile, spending on cloud infrastructure services was 33% higher than the previous year, spending on cloud software services was 20% higher, and there was a 17% growth in notebook PC shipments. In short, cybersecurity spending in 2020 did not keep up with the pace of digital transformation, creating even greater gaps in organizations’ ability to effectively address the security challenges introduced by public cloud infrastructure and modern containerized applications: complex environments, fragmented stacks and borderless infrastructure, not to mention the unprecedented speed, agility and scale. See our white paper, Introduction to Cloud Security Blueprint, for a detailed discussion of cloud security challenges, with or without a pandemic. In this blog post, we look at nine of the biggest cloud breaches of 2020, where “big” is not necessarily the number of data records actually compromised but rather the scope of the exposure and potential vulnerability.


When is AI actually AI? Exploring the true definition of artificial intelligence

Whatever the organisation, consumers insist on seeing instant results – with personalisation being ever more important. If this isn’t happening, businesses will start seeing ‘drop off’ as customers seek an alternative, which, in today’s competitive market, could prove disastrous. There is an opportunity now for businesses to combat this by implementing true, bespoke AI models that can sift through vast amounts of data and make their own intelligent decisions. After all, the amount of data being generated across the globe is skyrocketing, and organisations are continuing to share their data with one another – so organisation and analysis at this level is a must. However, it’s important to note that AI isn’t for everyone. The move to AI is a huge leap, so businesses must consider whether they actually need AI to achieve their goals. In some cases, investing in advanced analytics and insights is sufficient to help a business run, grow and create value. So, if advanced analytics does the job, why invest in AI? Most AI projects fail because there is no real adoption after the initial proof of concept.


How DevOps teams are using—and abusing—DORA metrics

DORA stands for DevOps Research and Assessment, an information technology and services firm founded by Gene Kim and Nicole Forsgren. In Accelerate, Nicole, Gene and Jez Humble collected and summarized the outcomes many of us have seen when moving to a continuous flow of value delivery. They also discussed the behaviors and culture that successful organizations use and provided guidance on what to measure and why. ... Related to this is the idea of using DORA metrics to compare delivery performance between teams. Every team has its own context. The product is different, with different delivery environments and different problem spaces. You can track team improvement and, if you have a generative culture, show teams how they are improving compared to one another, but stack-ranking teams will have a negative effect on customer and business value. Where the intent of the metrics is to manage performance rather than track the health of the entire system of delivery, the metrics push us down the path toward becoming feature factories.
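
For teams that do want to track their own trend rather than rank themselves against others, two of the four DORA metrics can be derived from nothing more than deployment records. The sketch below uses an invented record format purely for illustration; it is not tied to any particular delivery platform.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment log: when each change was committed and deployed.
deployments = [
    {"committed": datetime(2021, 9, 1, 9),  "deployed": datetime(2021, 9, 1, 15)},
    {"committed": datetime(2021, 9, 3, 11), "deployed": datetime(2021, 9, 4, 10)},
    {"committed": datetime(2021, 9, 7, 14), "deployed": datetime(2021, 9, 7, 16)},
]

window_days = 7
deployment_frequency = len(deployments) / window_days    # deployments per day
lead_times_hours = [
    (d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deployments
]

print(f"Deployment frequency: {deployment_frequency:.2f} per day")
print(f"Median lead time for changes: {median(lead_times_hours):.1f} hours")
```

Watching how these numbers move for a single team over successive weeks keeps the focus on the health of the delivery system, which is the use the article endorses.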


Intel AI Team Proposes A Novel Machine Learning (ML) Technique, MERL

What is unique about their design is that it allows all learners to contribute to and draw from a single buffer at the same time. Each learner has access to everyone else’s experiences, which aids its own exploration and makes it significantly more efficient at its own task. The second group of agents, dubbed actors, was tasked with combining all of the little movements in order to achieve the broader goal of prolonged walking. Since these agents were rarely close enough to register a reward, the team used a genetic algorithm, a technique that simulates biological evolution through natural selection. Genetic algorithms start with possible solutions to a problem and utilize a fitness function to develop the best answer over time. They created a set of actors for each “generation,” each with a unique method for completing the walking job. They then graded them according to their performance, keeping the best and discarding the others. The following generation of actors was the survivors’ “offspring,” inheriting their policies.
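
The genetic-algorithm loop described here (grade a generation, keep the fittest, breed the next generation from the survivors) can be illustrated generically. The toy below is not Intel’s MERL code; the policy representation and fitness function are invented stand-ins.

```python
import random

def fitness(policy: list[float]) -> float:
    """Stand-in for 'how well did this actor walk?': prefer values near 1.0."""
    return -sum((p - 1.0) ** 2 for p in policy)

def offspring(parent: list[float]) -> list[float]:
    """Children inherit the parent's policy with small random perturbations."""
    return [p + random.gauss(0, 0.1) for p in parent]

# Initial generation of actors, each with a random policy.
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)  # grade the generation
    survivors = population[:5]                  # keep the best, discard the rest
    population = survivors + [offspring(random.choice(survivors)) for _ in range(15)]

print("Best policy found:", max(population, key=fitness))
```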


Backend For Frontend Authentication Pattern with Auth0 and ASP.NET Core

The Backend For Frontend (a.k.a. BFF) pattern for authentication emerged to mitigate any risk that may occur from negotiating and handling access tokens from public clients running in a browser. The name also implies that a dedicated backend must be available for performing all the authorization code exchange and handling of the access and refresh tokens. This pattern relies on OpenID Connect, which is an authentication layer that runs on top of OAuth to request and receive identity information about authenticated users. ... Visual Studio ships with three templates for SPAs with an ASP.NET Core backend: ASP.NET Core with Angular, ASP.NET Core with React.js, and ASP.NET Core with React.js and Redux, which includes all the necessary plumbing for using Redux. ... The authentication middleware parses the JWT access token and converts each attribute in the token into a claim attached to the current user in context. Our policy handler uses the claim associated with the scope to check that the expected scope is there.
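
The scope check itself is simple to picture. The sketch below is written in Python rather than the article’s ASP.NET Core C#, and assumes the middleware has already validated the JWT and exposed its attributes as claims; the claim values are invented for illustration.

```python
# Claims produced by an (already validated) access token; values are illustrative.
claims = {
    "sub": "auth0|1234567890",
    "aud": "https://api.example.com",
    "scope": "read:messages write:messages",
}

def has_scope(token_claims: dict, required_scope: str) -> bool:
    """Policy check: succeed only if the required scope is among the granted scopes."""
    granted = token_claims.get("scope", "").split()
    return required_scope in granted

print(has_scope(claims, "read:messages"))    # True: request is authorized
print(has_scope(claims, "delete:messages"))  # False: policy fails, request rejected
```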


REvil/Sodinokibi Ransomware Universal Decryptor Key Is Out

While Bitdefender isn’t able to share details about the key, given the fact that the firm mentioned a “trusted law enforcement partner,” Boguslavskiy conjectured that Bitdefender likely “conducted an advanced operation on REvil’s core servers and infrastructures with or for European law enforcement and was somehow able to reconstruct or obtain the master key.” Using the key in a decryptor will unlock any victim, he said, “unless REvil redesigned their entire malware set.” But even if the reborn REvil did redesign the original malware set, the key will still be able to unlock victims that were attacked prior to July 13, Boguslavskiy said. Advanced Intel monitors the top actors across all underground discussions, including on XSS, a Russian-language forum created to share knowledge about exploits, vulnerabilities, malware and network penetration. So far, the intelligence firm hasn’t spotted any substantive discussion about the universal key on these underground forums. Boguslavskiy did note, however, that the administrator of XSS has been trying to shut down discussion threads, since they “don’t see any use in the gossip.”


What to expect from SASE certifications

Secure access service edge (SASE) is a network architecture that rolls SD-WAN and security into a single, centrally managed cloud service that promises simplified WAN deployment, improved security, and better performance. According to Gartner, SASE’s benefits are transformational because it can speed deployment time for new users, locations, applications and devices as well as reduce attack surfaces and shorten remediation times by as much as 95%. ... The level one certification has twelve sections, and it takes about a day to complete. Level two has five stages, takes about half a day, and requires that applicants first complete level one. The training and testing are delivered on the Credly platform. “It integrates with LinkedIn, so it’s automatically shared on your LinkedIn profile,” Webber-Zvik says. As of Sept. 1, more than 1,000 people have earned level one certification, and they represent multiple levels of professional experience and job categories. Half are current Cato customers, and some of the rest may be considering going with Cato, says Dave Greenfield, Cato’s director of technology evangelism.


The difference between physical and behavioural biometrics, and which you should be using

The debate around digital identity has never been more important. The COVID-19 pandemic pushed us almost entirely online, with many businesses pivoting to become e-tailers almost overnight. Our reliance on online services – whether ordering a new bank card, getting your groceries delivered, or talking to friends – has given bad actors the perfect hunting ground. With the advent of the internet, the world moved online. However, authentication processes from the physical world were digitised rather than re-designed for the digital world. The processes businesses digitised lack security, are cumbersome and don’t preserve privacy. For example, the password: it is now 60 years old, yet still relied on today to protect our identities and data. Digitised processes have enabled the rise in online fraud, scams, social engineering, and synthetic identities. Our own research highlighted how a quarter of consumers globally receive more scam text messages than they do from friends and family, with over half (54%) of UK consumers stating that they trust organisations less after receiving a scam message.


Resetting a Struggling Scrum Team Using Sprint 0

It is hard to determine in Sprint 0 if you are done. There is a balance to strike between performing enough upfront planning and agreement to provide clarity and comfort, and taking significant time away from delivery to plan for every eventuality that could appear in the sprints that follow Sprint 0. After running these sessions, we entered our first delivery sprint in the hopes that the agreed ways of working would help us eliminate any challenges we found together. However, we encountered a few rocks that we had to navigate around on our path to quieter seas. One early issue that surfaced was that of the level of bonding within the team. Despite the new team members settling in well, and communication channels being agreed upon to help Robin and the others collaborate, it became clear that the developer group needed to build trust to work effectively. Silence was a big part of many planning and refinement ceremonies. This was not a team of strong extroverts, and I had concerns that the team was not comfortable speaking up.



Quote for the day:

"Leadership is the art of influencing people to execute your strategic thinking" -- Nabil Khalil Basma