Daily Tech Digest - August 31, 2021

LockFile Ransomware Uses Never-Before Seen Encryption to Avoid Detection

The ransomware first exploits unpatched ProxyShell flaws and then uses what’s called a PetitPotam NTLM relay attack to seize control of a victim’s domain, researchers explained. In this type of attack, a threat actor uses Microsoft’s Encrypting File System Remote Protocol (MS-EFSRPC) to connect to a server, hijack the authentication session, and manipulate the results such that the server then believes the attacker has a legitimate right to access it, Sophos researchers described in an earlier report. LockFile also shares some attributes of previous ransomware and uses other tactics, such as forgoing the need to connect to a command-and-control center, to hide its nefarious activities, researchers found. “Like WastedLocker and Maze ransomware, LockFile ransomware uses memory mapped input/output (I/O) to encrypt a file,” Loman wrote in the report. “This technique allows the ransomware to transparently encrypt cached documents in memory and causes the operating system to write the encrypted documents, with minimal disk I/O that detection technologies would spot.”
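
The memory-mapped I/O trick Sophos describes relies on a standard operating-system facility rather than anything exotic. As a rough, benign illustration (not LockFile's actual code), the Python sketch below modifies a file through a memory map and lets the OS flush the dirty pages itself, so the program never issues an explicit write call for the new contents; the file name is a placeholder for any existing, non-empty file.

```python
import mmap

# Illustrative only: transform a file's contents through a memory map so the
# operating system, not explicit write() calls, writes the changed pages to disk.
with open("example.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:          # map the whole (non-empty) file
        mm[:] = bytes(b ^ 0x5A for b in mm[:])    # modify the cached pages in place
        mm.flush()                                # ask the OS to write them back
```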


How To Prepare for SOC 2 Compliance: SOC 2 Types and Requirements

To be reliable in today’s data-driven world, SOC 2 compliance is essential for all cloud-based businesses and technology services that collect and store their clients’ information. This gold standard of information security certifications helps validate your data privacy controls and security infrastructure and reduces the risk of a data breach. Data breaches are all too common nowadays among companies of every size, across the globe and in all sectors. According to PurpleSec, half of all data breaches will occur in the United States by 2023. Experiencing such a breach causes customers to lose trust in the targeted company, and those who have been through one tend to move their business elsewhere to protect their personal information in the future. SOC 2 compliance can guard against this pain by improving customer trust in a company with sound data privacy policies. Companies that adhere to the gold-standard principles of SOC 2 compliance can provide the audit report as evidence of secure data privacy practices.


6 Reasons why you can’t have DevOps without Test Automation

Digital transformation is gaining traction every single day. The modern consumer demands higher-quality products and services. Adopting new technologies helps companies stay ahead of the competition: they can achieve higher efficiency and better decision-making, and there is room for innovation aimed at meeting the needs of customers. All of this implies integration, continuous development, innovation, and deployment, and all of it is achievable with DevOps and the attendant test automation. But can one exist without the other? We believe not; test automation is a critical component of DevOps, and we will tell you why. ... One of the biggest challenges with software is the need for constant updates. That is the only way to avoid glitches while improving upon what exists. But the process of testing across many operating platforms and devices is difficult. DevOps processes must execute testing, development, and deployment in the right way, because improper testing can lead to low-quality products, and customers have plenty of other options in a competitive business landscape.


One Year Later, a Look Back at Zerologon

Netlogon is a protocol that serves as a channel between domain controllers and machines joined to the domain, and it handles authenticating users and other services to the domain. CVE-2020-1472 stems from a flaw in the cryptographic authentication scheme used by the Netlogon Remote Protocol. An attacker who sends Netlogon messages in which various fields are filled with zeroes can change the computer password of the domain controller that is stored in Active Directory, Tervoort explains in his white paper. This can be used to obtain domain admin credentials and then restore the original password for the domain controller, he adds. "This attack has a huge impact: it basically allows any attacker on the local network (such as a malicious insider or someone who simply plugged in a device to an on-premises network port) to completely compromise the Windows domain," Tervoort wrote. "The attack is completely unauthenticated: the attacker does not need any user credentials." Another reason Zerologon appeals to attackers is that it can be plugged into a variety of attack chains.


Forrester: Why APIs need zero-trust security

API governance needs zero trust to scale. Getting governance right sets the foundation for balancing business leaders’ needs for a continual stream of innovative API and endpoint features with the need for compliance. Forrester’s report says “API design too easily centers on innovation and business benefits, overrunning critical considerations for security, privacy, and compliance such as default settings that make all transactions accessible.” The report adds that policies must ensure the right API-level trust is enabled for attack protection. That isn’t easy to do with a perimeter-based security framework. The primary goals need to be setting a security context for each API type and ensuring that zero-trust security methods can scale. APIs need to be managed with least-privileged access and microsegmentation in every phase of the SDLC and the continuous integration/continuous delivery (CI/CD) process. The well-documented SolarWinds attack is a stark reminder of how source code can be hacked and legitimate program executable files can be modified undetected and then invoked months after being installed on customer sites.
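
One way to picture least-privileged access at the API level is a per-route scope check, where every endpoint declares the single permission it needs and everything else is denied by default. The sketch below is hypothetical (the routes, scopes, and token lookup are invented, and it assumes Flask is installed); a real deployment would validate tokens against an identity provider rather than a hard-coded table.

```python
from functools import wraps
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Stand-in for token introspection against an identity provider (invented data).
TOKEN_SCOPES = {"reader-token": {"orders:read"}, "admin-token": {"orders:read", "orders:write"}}

def require_scope(scope):
    """Deny by default: the route runs only if the bearer token carries the exact scope it needs."""
    def decorator(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
            if scope not in TOKEN_SCOPES.get(token, set()):
                abort(403)
            return view(*args, **kwargs)
        return wrapper
    return decorator

@app.route("/orders", methods=["GET"])
@require_scope("orders:read")
def list_orders():
    return jsonify([])                      # placeholder payload

@app.route("/orders", methods=["POST"])
@require_scope("orders:write")
def create_order():
    return jsonify({"status": "created"}), 201
```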


The consumerization of the Cybercrime-as-a-Service market

Many trends in the cybercrime market and shadow economy mirror those in the legitimate world, and this is also the case with how cybercriminals are profiling and targeting victims. The Colonial Pipeline breach triggered a serious reaction from the US government, including some stark warnings to criminal cyber operators, CCaaS vendors and any countries hosting them that a ransomware attack may lead to a kinetic response or even inadvertently trigger a war. Not long after, the criminal gang suspected to be behind the attack resurfaced under a new name, BlackMatter, and advertised that it was buying access from brokers with very specific criteria. Seeking companies with revenue of at least 100 million US dollars per year and 500 to 15,000 hosts, the gang offered $100,000, but also provided a clear list of targets it wanted to avoid, including critical infrastructure and hospitals. It’s a net positive if the criminals actively avoid disrupting critical infrastructure and important targets such as hospitals.


NGINX Commits to Open Source and Kubernetes Ingress

Regarding NGINX’s open source software moving forward, Whiteley said the company’s executives have committed to a model where open source will be meant for use in production and nothing less. Whiteley even said that, if they were able to go back in time, certain features currently available only in NGINX Plus would be available in the open source version. “One model is ‘open source’ equals ‘test/dev,’ ‘commercial’ equals ‘production,’ so the second you trip over into production, you kind of trip over a right-to-use issue, where you then have to start licensing the technology ...” said Whiteley. “What we want to do is focus on, as the application scales — it’s serving more traffic, it’s generating more revenue, whatever its goal is as an app — that the investment is done in lockstep with the success and growth of that.” This first point, NGINX’s stated commitment to open source, serves partly as background for the last point mentioned above, wherein NGINX says it will devote additional resources to the Kubernetes community, a move partly driven by the fact that Alejandro de Brito Fontes, the founder of the ingress-nginx project, has decided to step aside.


How RPA Is Changing the Way People Work

Employees are struggling under the burden of routine, repetitive work even as they notice consumers demanding better services and products. Employees expect companies to improve the working environment in the same spirit as improving customer satisfaction. The corporate response, in the form of automation, is expanding the comfort zone of employees. But there’s a flip side to the RPA coin. With the rise of automation, people fear the consequences of RPA solutions replacing human labor and marginalizing the human touch that was at the core of services and product delivery. Such a threat may seem exaggerated: RPA removes the drudgery of routine work and sets the stage for workers to play a more decisive role in areas where human touch, care, and creativity are essential. ... With loads of time and better tools at their disposal, employees are more caring and sensitive to the need for making a difference in the lives of customers. More employees are actively unlocking their reservoir of creativity.


Predicting the future of the cloud and open source

The open source community has also started to dedicate time and effort to resolving some of the world’s most life-threatening challenges. When the COVID-19 pandemic hit, the open source community quickly distributed data to create apps and dashboards that could follow the evolution of the virus. Tech leaders like Apple and Google came together to build upon this technology to provide an open API that could facilitate the development of standards and applications by health organisations around the world, as well as open hardware designs for ventilators and other critical medical equipment that was in high demand. During lockdown last year, the open source community also launched projects to tackle the climate crisis, an increasingly important issue that world leaders are under ever-more pressure to address. One of the most notable developments was the launch of the Linux Foundation Climate Finance Foundation, which aims to provide more funding for game-changing solutions through open source applications.


Pitfalls and Patterns in Microservice Dependency Management

Running a product in a microservice architecture provides a series of benefits. Overall, the possibility of deploying loosely coupled binaries in different locations allows product owners to choose among cost-effective and high-availability deployment scenarios, hosting each service in the cloud or on their own machines. It also allows for independent vertical or horizontal scaling: increasing the hardware resources for each component, or replicating components, which also has the benefit of allowing the use of different independent regions. ... Despite all its advantages, having an architecture based on microservices may also make it harder to deal with some processes. In the following sections, I'll present the scenarios I mentioned before (although I changed some of the real names involved). I will present each scenario in detail, including some memorable pains related to managing microservices, such as aligning traffic and resource growth between frontends and backends. I will also talk about designing failure domains, and computing product SLOs based on the combined SLOs of all microservices.
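
That last computation is worth making concrete: if the services sit serially in a request path and fail independently, a product's composite availability is simply the product of the individual availabilities. A minimal sketch with made-up SLO targets:

```python
from math import prod

# Made-up availability targets for services that all sit in one request path.
slos = {"frontend": 0.999, "auth": 0.9995, "orders": 0.999, "database": 0.9999}

# Independent failures, all services required: multiply the availabilities.
composite = prod(slos.values())
print(f"Composite availability: {composite:.4%}")                      # about 99.74%
print(f"Downtime budget per 30 days: {(1 - composite) * 30 * 24 * 60:.0f} minutes")
```

The striking part is that four services each promising three or four nines jointly offer noticeably less, which is why the combined SLO has to be computed rather than assumed.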



Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye

Daily Tech Digest - August 30, 2021

4 signs your company has an innovation-minded culture

It’s important that your organization communicates its values clearly and executes tactics consistently. If your organization values a culture of innovation, communicate that importance while putting your plan into action. This commitment might require you to divert some resources from production at times, but it’s an incredibly worthwhile investment. A workforce that feels valued will help you enjoy the impacts of innovation down the road. Career paths don’t happen in a straight line; when you empower people with tools, training, and resources, they’ll excel in their unique development journeys and support a culture of innovation. To invest in our people, we created Zotec University, a learning development platform offering hundreds of custom learning journeys to help participants hone their skills. We also offer a performance development platform that places team members in control of their own career experiences. Creating a culture of innovation takes careful planning, purposeful decision-making, intentionality, and consistent communication.


What is a chief technology officer? The exec who sets tech strategy

“As companies push to effectively drive technology transformation, we believe there is a very strong push to find technology leaders [who] bring experience and capabilities from hands-on leadership and stewardship of such activities,” Stephenson says. The CTO role naturally requires a strong knowledge of various technologies, and “real technology acumen, especially in the architecture, software, and technology strategy areas to address legacy technology challenges,” Stephenson says. Knowing how technology works is crucial, but it’s also important to be able to explain the business value of a particular technology to C-level colleagues who might not be technically inclined. It’s also vital to be able to see how technology fits with strategic business goals. “Technology vision coupled with strategic thinking beyond technology” is important, says Ozgur Aksakai, president of the Global CTO Forum, an independent, global organization for technology professionals. “There are a lot of technology trends that do not live up to their promises,” Aksakai says. 


Reasons to Opt for a Multicloud Strategy

It is like giving all the critical keys to one person. Do you know the dependency and expectations this creates? Huge. What if you carefully select the best of the best services from different cloud providers? It looks like a feasible solution, and this is how a multicloud strategy works. A multicloud strategy empowers and upgrades a company’s IT systems, performance, cloud deployment, cloud cost optimization and more. The multicloud approach presents a lot of options for the enterprise. For example, some services are more cost-effective from one provider at scale versus those from others. Multicloud avoids vendor lock-in by not depending on only one cloud provider, and by helping companies select best-of-breed cloud services from different providers for application workloads. The multicloud pattern provides system redundancy that reduces the hazards of downtime, should it occur. A multicloud strategy also helps companies raise their security bar by selecting best-of-breed DevSecOps solutions, while improving disaster recovery capabilities and increasing uptime.


How Blockchain Startups Transform Banking and Payments Industry

The payments industry today has been deeply impacted by the rise of blockchain technology and cryptocurrencies. The legacy system is built on an inheritance of technologies dating to the advent of credit cards and interbank settlement, developed in the mid-1900s for use in centralized, established financial institutions serving both institutional and retail clients, in an era when the post-war fiat money system was the only option for private financial representation. With the advent of blockchain technology and cryptocurrencies, it became increasingly clear that the legacy system, while revolutionary in its early days, is quite inefficient and is designed from the perspective of an institutional client. This leads to relatively limited access to financial services for the majority of the retail market segment. Retail clients in developing nations, especially, have been hit particularly hard, with higher fees, longer processing times for transactions, more invasive and ineffective KYC/AML processes, and limited access to technology, and thus limited access to all types of financial services.


Cerebras Upgrades Trillion-Transistor Chip to Train ‘Brain-Scale’ AI

A major challenge for large neural networks is shuttling around all the data involved in their calculations. Most chips have a limited amount of memory on-chip, and every time data has to be shuffled in and out it creates a bottleneck, which limits the practical size of networks. The WSE-2 already has an enormous 40 gigabytes of on-chip memory, which means it can hold even the largest of today’s networks. But the company has also built an external unit called MemoryX that provides up to 2.4 petabytes of high-performance memory, which is so tightly integrated it behaves as if it were on-chip. Cerebras has also revamped its approach to the data it shuffles around. Previously the guts of the neural network would be stored on the chip, and only the training data would be fed in. Now, though, the weights of the connections between the network’s neurons are kept in the MemoryX unit and streamed in during training. By combining these two innovations, the company says, they can train networks two orders of magnitude larger than anything that exists today.


No-Code Automated Testing: Best Practices and Tools

No-code automated tests are usually at a system or application level, which makes creating a test suite more daunting. It is important not to become fixated on getting 100% test coverage from the get-go. 100% coverage is a great goal, but it can seem so far away when starting out. Instead, we should focus on getting a handful of test cases created and really understanding how the tools we select work. Becoming an expert in our tools is much more beneficial than creating dozens of tests in an unfamiliar tool. It can be tempting to focus on every use case all at once, but it is important to prioritize which use cases to target first. The reality of development and testing is that we may not be able to test every single use case. ... It can be tempting to exercise every nook and cranny of an application, but it is important to start with only the actions the user will take. For example, when testing a login form, it is important to test the fields visible to the user and the login button, since that is what the user will interact with in most cases. Testing the edge cases matters, but we should always start with the happy path before moving on to edge cases.
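
The happy-path-first advice is tool-agnostic; it applies whether the test is recorded in a no-code tool or scripted by hand. As a rough scripted equivalent (assuming Selenium and a made-up URL and element IDs), a first login test would touch only the two visible fields and the button:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Happy-path login test: exercise only what the user actually touches.
# The URL and element IDs are invented for illustration.
driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")
    driver.find_element(By.ID, "username").send_keys("demo-user")
    driver.find_element(By.ID, "password").send_keys("demo-password")
    driver.find_element(By.ID, "login-button").click()
    assert "Dashboard" in driver.title   # a successful login should land here
finally:
    driver.quit()
```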


9 Automated Testing Practices To Avoid Tutorial (Escape Pitfalls)

Most people spend way more time reading source code than writing it, so making your code as easy to read as possible is an excellent decision. It'll never read like Hemingway, but that doesn't mean it can't be readable to people other than you. Yoni Goldberg considers this the Golden Rule for testing: one must instantly understand the test's intent. You will love yourself (and your team members will pat you on the back) for making your tests readable. When you read those same tests a year down the road, you won't be thinking, “What was I doing?” or “What was this test even for?” If you don't understand what a test is for, you obviously can't use it. And if you can't use a test, what value does it have to you or your team? ... If your new test relies on a successful previous test, you're asking for trouble. If the previous test failed or corrupted the data, any subsequent tests will likely fail or provide incorrect results. Isolating your tests will give you more consistent results, and accurate and consistent results will make your tests worthwhile.
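
Test isolation is easiest to see with fixtures that rebuild state for every test, so no test inherits another test's leftovers. A minimal pytest sketch with an invented in-memory store:

```python
import pytest

class InMemoryStore:
    """Stand-in for whatever data layer the tests exercise (invented for illustration)."""
    def __init__(self):
        self.users = {}

    def add_user(self, name):
        self.users[name] = {"active": True}

@pytest.fixture
def store():
    # Each test receives a freshly built store, never shared state.
    return InMemoryStore()

def test_add_user(store):
    store.add_user("alice")
    assert "alice" in store.users

def test_store_starts_empty(store):
    # Passes no matter what other tests did, because the fixture rebuilt the state.
    assert store.users == {}
```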


Facilitate collaborative breakthrough with these moves

Vertical facilitation is common and seductive because it offers straightforward and familiar answers to these five questions. In this approach, both the participants and the facilitator typically give confident, superior, controlling answers to the five questions (i.e., they identify one way to reach their goals). In horizontal facilitation, by contrast, participants typically give defiant, defensive, autonomous answers, and the facilitator supports this autonomy. The vertical and horizontal approaches answer the five collaboration questions in opposite ways. In transformative facilitation, the facilitator helps the participants alternate between the two approaches. ... Often, when collaborating, each of the participants and the facilitator starts off with a confident vertical perspective: “I have the right answer.” Each person thinks, “If only the others would agree with me, then the group would be able to move forward together more quickly and easily.” But when members of the group take this position too far or hold it for too long and start to get stuck in rigid certainty, the facilitator needs to help them explore other points of view, a collaboration move I call inquiring. 


Private 5G: Tips on how to implement it, from enterprises that already have

The first rule is your private 5G is a user of your IP network, not an extension of it. Every location you expect to host private 5G cells and every site you expect will have some 5G features hosted will need to be on your corporate VPN, supported by the switches and routers you'd typically use. Since all three private-5G enterprises were using their 5G networks largely for IoT that was focused on some large facilities, that didn’t present a problem for them. It seems likely that most future private 5G adoption will fit the same model, so this rule should be easy to follow overall. The second rule is that 5G control-plane functions will be hosted on servers. 5G RAN and O-RAN control-plane elements should be hosted close to your 5G cells, and 5G core features at points where it's convenient to concentrate private 5G traffic. Try to use the same kind of server technology, the same middleware, and the same software source for all of this, and be sure you get high-availability features. Rule three is that 5G user-plane functions associated with the RAN should be hosted on servers, located with the 5G RAN control-plane features. 


5 DevSecOps open source projects to know

Properly securing a software supply chain involves more than simply doing a point-in-time scan as part of a DevSecOps CI/CD pipeline. With the help of a working partnership that includes Google, the Linux Foundation, Red Hat, and Purdue University, sigstore brings together a set of tools that developers, software maintainers, package managers, and security experts can benefit from. It handles the digital signing, verification, and transparency logging needed for auditing, making it safer to distribute and use any signed software. The goal is to provide a free and transparent chain-of-custody tracing service for everyone. This sigstore service will run as a non-profit, public-good service to provide software signing. Cosign, which released its 1.0 version in July 2021, signs and verifies artifacts stored in Open Container Initiative (OCI) registries. It also includes underlying specifications for storing and discovering signatures. Fulcio is a Root Certificate Authority (CA) for code-signing certificates. It issues certificates based on an OpenID Connect (OIDC) email address. The certificates that Fulcio issues to clients in order for them to sign an artifact are short-lived.



Quote for the day:

"Leadership, on the other hand, is about creating change you believe in." -- Seth Godin

Daily Tech Digest - August 29, 2021

What is Terraform and Where Does It Fit in the DevOps Process?

Terraform is rapidly revolutionizing the entire landscape of DevOps and boosting the efficiency of DevOps projects. Terraform shares the same “Infrastructure as Code” (IaC) approach as most DevOps technologies and tools such as Ansible. However, Terraform operates in a distinct manner, as it focuses primarily on automating the infrastructure itself. This means that your complete cloud infrastructure, including networking, instances, and IPs, can be easily defined in Terraform. There are some crucial differences between how Terraform operates and how other comparable technologies get the job done. Terraform provides support for all major cloud providers and doesn’t restrict users to a specific platform like other tools do. Terraform also handles provisioning failures in a much better way than other comparable tools: it marks the suspect resources and then removes and re-provisions them in the next execution cycle. This approach improves the failure-handling mechanism to a great extent, since the system doesn’t have to rebuild all the resources, including the ones that were successfully provisioned.


Why Blockchain-Based Cloud Computing Could Be the Future of IoT

With the adoption of IoT in more devices, it is also possible that data security threats such as hacking and data breaches will increase significantly. So, to protect the trending IoT technology against such issues, blockchain technology comes into the picture. Blockchain networks are known to be more secure, cryptographically protected, and reliable when it comes to keeping data safe. Thus, blockchain technology is expanding along with IoT to keep it safe. Generally, IoT is crucial for providing users with a centralized network of devices. For instance, this centralized network is important for controlling home appliances, security sensors, or network adapters. The IoT controller sends and receives the data from these devices to enable the wireless connection system. Currently, brands such as Samsung are manufacturing smart home appliances like air conditioners that can be connected to a simple mobile application. Moreover, Google’s Home device is capable of controlling multiple devices with voice commands alone.


EXCLUSIVE Microsoft warns thousands of cloud customers of exposed databases

The vulnerability is in Microsoft Azure's flagship Cosmos DB database. A research team at security company Wiz discovered it was able to access keys that control access to databases held by thousands of companies. Wiz Chief Technology Officer Ami Luttwak is a former chief technology officer at Microsoft's Cloud Security Group. Because Microsoft cannot change those keys by itself, it emailed the customers Thursday telling them to create new ones. Microsoft agreed to pay Wiz $40,000 for finding the flaw and reporting it, according to an email it sent to Wiz. "We fixed this issue immediately to keep our customers safe and protected. We thank the security researchers for working under coordinated vulnerability disclosure," Microsoft told Reuters. Microsoft's email to customers said there was no evidence the flaw had been exploited. "We have no indication that external entities outside the researcher (Wiz) had access to the primary read-write key," the email said. “This is the worst cloud vulnerability you can imagine. It is a long-lasting secret,” Luttwak told Reuters. 


Linux 5.14 set to boost future enterprise application security

One of the ways that Linux users have had to mitigate those vulnerabilities is by disabling hyper-threading on CPUs and therefore taking a performance hit. “More specifically, the feature helps to split trusted and untrusted tasks so that they don’t share a core, limiting the overall threat surface while keeping cloud-scale performance relatively unchanged,” McGrath explained. Another area of security innovation in Linux 5.14 is a feature that has been in development for over a year and a half and that will help to protect system memory in a better way than before. Attacks against Linux and other operating systems often target memory as a primary attack surface to exploit. With the new kernel, there is a capability known as memfd_secret() that will enable an application running on a Linux system to create a memory range that is inaccessible to anyone else, including the kernel. “This means cryptographic keys, sensitive data and other secrets can be stored there to limit exposure to other users or system activities,” McGrath said.


4 Reasons Why Every Data Scientist Should Study Organizational Psychology

As data scientists we need to understand the psychology of our data sets in order to work with data effectively. We also need to motivate ourselves and others so that everyone is doing what it takes to deliver results on time and under budget. You might be a team leader or an executive who leads a data science team. There are many data science roles that require someone to lead others. If you are a data scientist in this role, understanding the psychology of your data scientists is essential for success as a team leader and executive. Organizational psychologists study topics such as leadership styles, group dynamics, motivation, and conflict resolution — all of which are important for any data scientist looking to lead a team. Setting well-defined goals that your direct reports understand and allowing them to take ownership of their work are examples of strong leadership. Thus, having a deeper understanding of these psychology-based concepts and putting them to use in your daily work will result in a more productive and more fulfilling work experience for you and the team.


5 keys that define leaders in a storm

No one is invulnerable, and we all have a point that, when touched, takes us to that state where our most sensitive fibers appear. However powerful and almost indestructible some leaders may appear, I work constantly with people who, in intimacy with themselves, are exactly the same as anyone else. To work on accepting vulnerability, the best tools are self-awareness, knowing who you are, and daring to dive deep into your inner life. By doing so, you will strengthen your confidence, and you will also know how to allow yourself moments where it is not necessary to force yourself to be someone you are not, and simply swim in your emotions without repressing or hiding them. Limiting tendencies are accepted behaviors that you hold about your emotional world. They feed on restrictive beliefs that, once you take them as true, you assume to be natural and real. Limiting tendencies are made up of a range of triggers against which you act automatically, manifesting as reactions that always lead you down the same path.


Quantum computers could read all your encrypted data. This 'quantum-safe' VPN aims to stop that

In other words, encryption protocols as we know them are essentially a huge math problem for hackers to solve. With existing computers, cracking the equation is extremely difficult, which is why VPNs, for now, are still a secure solution. But quantum computers are expected to bring about huge amounts of extra computing power – and with that, the ability to hack any cryptography key in minutes. "A lot of secure communications rely on algorithms which have been very successful in offering secure cryptography keys for decades," Venkata Josyula, the director of technology at Verizon, tells ZDNet. "But there is enough research out there saying that these can be broken when there is a quantum computer available at a certain capacity. When that is available, you want to be protecting your entire VPN infrastructure." One approach that researchers are working on consists of developing algorithms that can generate keys that are too difficult to hack, even with a quantum computer. This area of research is known as post-quantum cryptography, and is particularly sought after by governments around the world.


Essential Skills Every Aspiring Cyber Security Professional Should Have

As a cybersecurity professional, your job will revolve around technology and its many applications, regardless of the position you’re going to fill. Therefore, a strong understanding of the systems, networks, and software you’re going to work with is crucial for landing a good job in the field. Cybersecurity is an extremely complex domain, with many sub-disciplines, which means it’s virtually impossible to be an expert in all areas. That’s why you should choose a specialization and strive to assimilate as much knowledge and experience as possible in your specific area of activity. Earning a certificate of specialization is a good starting point. It’s good to have a general knowledge of other areas of cybersecurity, but instead of becoming a jack of all trades, you should focus on your specific domain if you want to increase your chances of success. Cybersecurity is all about protecting the company or organization you work for against potential cyber threats. This implies identifying vulnerabilities, improving security policies and protocols, eliminating cybersecurity risks, minimizing damage after an attack, and constantly coming up with new solutions to prevent similar issues from happening again.


The Surprising History of Distributed Ledger Technology

The concept of a distributed ledger can be traced back as far as the times of the Roman Empire. Then as now, the problem was how to achieve consensus on the data in a decentralized, distributed, and trustless manner. This problem is described as the Byzantine Generals’ Problem, which depicts a scenario where a general plans to launch an attack but, because the army is very dispersed, does not have centralized control. The only way to succeed is if the Byzantine army launches a planned and synchronized attack, where any miscommunication can cause the offence to fail. The only way the generals can synchronize a strike is by sending messages via messengers, which leads to several failure scenarios where different actors in the system behave dishonestly. Bitcoin solved the Byzantine Generals’ Problem by providing a unified protocol, called proof of work. The Generals’ Problem describes the main obstacle to massive, distributed processing and is the foundation for distributed ledger technology, where everyone must work individually to maintain a synchronized and distributed ledger.
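
The proof-of-work idea itself is easy to sketch: producing a valid block requires a brute-force search, while any node can verify the result with a single hash, which is what lets a dispersed, mutually distrustful network agree on one ledger. A minimal, non-production illustration:

```python
import hashlib
from itertools import count

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose SHA-256 of (block_data + nonce) starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest          # expensive to find...

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    # ...but trivially cheap for every other node to check.
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("block 1: alice pays bob 5")
assert verify("block 1: alice pays bob 5", nonce)
print(nonce, digest)
```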


The trouble with tools - Overcoming the frustration of failed data governance technologies

To explain, inside many organizations that claim to focus on data governance, the process is reliant on tools that produce a CSV of objects with no insight into where violations might exist. For example, they struggle to tell the difference between Personal Information (PI) and Personally Identifiable Information (PII). While most PI data doesn’t identify a specific person and isn’t as relevant to identifying governance violations, discovery tools still present that information to users, adding huge complexity to their processes and forcing them to revert to a manual process to filter what’s needed from what isn’t. Instead, it’s critical that organizations are able to view, classify and correlate data wherever it is stored, and do so from a single platform - otherwise, they simply can’t add value to the governance process. In the ideal scenario, effective governance tools will enable organizations to correlate their governance processes across all data sources to show where PII is being held, for example. The outputs then become much more accurate, so in a scenario where there are 10 million findings, users know with precision which of them are PII.
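
At its simplest, the PI-versus-PII distinction the article draws is a classification problem: flag values that identify a specific person and treat the rest as lower-risk. The toy sketch below (its two patterns are invented for illustration and nowhere near what a real discovery platform does) shows the shape of that step:

```python
import re

# Toy rules: values that identify a specific person are flagged as PII;
# everything else is treated as lower-risk PI. Real tools go far beyond regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(value: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(value):
            return f"PII ({label})"
    return "PI / other"

for sample in ["jane.doe@example.com", "123-45-6789", "prefers a window seat"]:
    print(sample, "->", classify(sample))
```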



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - August 28, 2021

Why scrum has become irrelevant

The purpose of the retrospective is just that: to reflect. We look at what worked, what didn’t work, and what kinds of experiments we want to try. Unfortunately, what it boils down to is putting the same Post-its of “good teamwork” and “too many meetings” in the same swim lanes as “what went well,” “what went wrong,” and “what we will do better.” ... Scrum is often the enemy of productivity, and it makes even less sense in the remote, post-COVID world. The premise of scrum should not be that one cookie cutter fits every development team on the planet. A lot of teams are just doing things by rote and with zero evidence of their effectiveness. An ever-recurring nightmare of standups, sprint grooming, sprint planning and retros can only lead to staleness. Scrum does not promote new and fresh ways of working; instead, it champions repetition. Let good development teams self-organize to their context. Track what gets shipped to production, add the time it took (in days!) after the fact, and track that. Focus on reality and not some vaguely intelligible burndown chart. Automate all you can and have an ultra-smooth pipeline. Eradicate all waste.


Your data, your choice

These days, people are much more aware of the importance of shredding paper copies of bills and financial statements, but they are perfectly comfortable handing over staggering amounts of personal data online. Most people freely give their email address and personal details, without a second thought for any potential misuse. And it’s not just the tech giants – the explosion of digital technologies means that companies and spin-off apps are hoovering up vast amounts of personal data. It’s common practice for businesses to seek to “control” your data and to gather personal data that they don’t need at the time on the premise that it might be valuable someday. The other side of the personal data conundrum is the data strategy and governance model that guides an individual business. At Nephos, we use our data expertise to help our clients solve complex data problems and create sustainable data governance practices. As ethical and transparent data management becomes increasingly important, younger consumers are making choices based on how well they trust you will handle and manage their data.


How Kafka Can Make Microservice Planet a Better Place

Kafka was originally developed under the Apache license, and Confluent later built on it to deliver a more robust version. In fact, Confluent delivers the most complete distribution of Kafka with Confluent Platform. Confluent Platform improves Kafka with additional community and commercial features designed to enhance the streaming experience of both operators and developers in production, at a massive scale. You can find thousands of documents about learning Kafka. In this article, we want to focus on using it in a microservice architecture, and we need an important concept named the Kafka topic for that. ... The final concept that we should learn about before starting our stream processing project is KStream. A KStream is an abstraction of a record stream of KeyValue pairs, i.e., each record is an independent entity/event. In the real world, Kafka Streams greatly simplifies stream processing from topics. Built on top of the Kafka client libraries, it provides data parallelism, distributed coordination, fault tolerance, and scalability. It deals with messages as an unbounded, continuous, and real-time flow of records.
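
Kafka Streams itself is a Java library, but the KStream idea, treating a topic as an unbounded flow of key/value records that you transform and forward, can be approximated with a plain consumer/producer loop. The sketch below assumes the confluent-kafka Python client, a local broker, and made-up topic names; it is an analogy for the concept, not a substitute for Kafka Streams' parallelism and fault-tolerance machinery.

```python
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orders-enricher",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["orders"])            # made-up input topic

try:
    while True:
        msg = consumer.poll(1.0)          # wait up to 1s for the next record
        if msg is None or msg.error():
            continue
        # Each record is an independent key/value event: transform it and forward it.
        enriched = msg.value().upper()    # placeholder processing step
        producer.produce("orders-enriched", key=msg.key(), value=enriched)
finally:
    consumer.close()
    producer.flush()
```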


What Happens When ‘If’ Turns to ‘When’ in Quantum Computing?

Quantum computers will not replace the traditional computers we all use now. Instead they will work hand-in-hand to solve computationally complex problems that classical computers can’t handle quickly enough by themselves. There are four principal computational problems for which hybrid machines will be able to accelerate solutions—building on essentially one truly “quantum advantaged” mathematical function. But these four problems lead to hundreds of business use cases that promise to unlock enormous value for end users in coming decades. ... Not only is this approach inefficient, it also lacks accuracy, especially in the face of high tail risk. And once options and derivatives become bank assets, the need for high-efficiency simulation only grows as the portfolio needs to be re-evaluated continuously to track the institution’s liquidity position and fresh risks. Today this is a time-consuming exercise that often takes 12 hours to run, sometimes much more. According to a former quantitative trader at BlackRock, “Brute force Monte Carlo simulations for economic spikes and disasters can take a whole month to run.” 
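
To see where those 12-hour (or month-long) runs come from, here is the brute-force Monte Carlo idea in miniature: price a single European call by simulating many terminal prices under assumed parameters and averaging the discounted payoff. All numbers are illustrative; a real portfolio revaluation repeats something like this across thousands of instruments and risk scenarios.

```python
import numpy as np

# Illustrative parameters: spot, strike, risk-free rate, volatility, maturity (years).
s0, strike, r, sigma, t = 100.0, 105.0, 0.01, 0.2, 1.0
n_paths = 1_000_000

rng = np.random.default_rng(seed=42)
z = rng.standard_normal(n_paths)
# Terminal prices under geometric Brownian motion.
st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
# Discounted average payoff of a European call.
price = np.exp(-r * t) * np.mean(np.maximum(st - strike, 0.0))
print(f"Estimated call price: {price:.2f}")
```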


Can companies build on their digital surge?

If digital is the heart of the modern organization, then data is its lifeblood. Most companies are swimming in it. Average broadband consumption, for example, increased 47 percent in the first quarter of 2020 over the same quarter in the previous year. Used skillfully, data can generate insights that help build focused, personalized customer journeys, deepening the customer relationship. This is not news, of course. But during the pandemic, many leading companies have aggressively recalibrated their data posture to reflect the new realities of customer and worker behavior by including models for churn or attrition, workforce management, digital marketing, supply chain, and market analytics. One mining company created a global cash-flow tool that integrated and analyzed data from 20 different mines to strengthen its solvency during the crisis. ... While it’s been said often, it still bears repeating: technology solutions cannot work without changes to talent and how people work. Those companies getting value from tech pay as much attention to upgrading their operating models as they do to getting the best tech. 


Understanding Direct Domain Adaptation in Deep Learning

To fill the gap between source data (training data) and target data (test data), a concept called domain adaptation is used. It is the ability to apply an algorithm that is trained on one or more source domains to a different target domain. It is a subcategory of transfer learning. In domain adaptation, the source and target data have the same feature space but come from different distributions, while transfer learning includes cases where the target feature space is different from the source feature space. ... In unsupervised domain adaptation, the learning data contains a set of labelled source examples, a set of unlabeled source examples and a set of unlabeled target examples. In semi-supervised domain adaptation, along with the unlabeled target examples we also take a small set of labelled target examples. And in the supervised approach, all the examples are labelled. A trained neural network generalizes well when the target data is represented similarly to the source data; to accomplish this, a researcher from King Abdullah University of Science and Technology, Saudi Arabia, proposed an approach called ‘Direct Domain Adaptation’ (DDA).



AI: The Next Generation Anti-Corruption Technology

Artificial intelligence, according to Oxford Insights, is the “next step in anti-corruption,” partially because of its capacity to uncover patterns in datasets that are too vast for people to handle. Humans may focus on specifics and follow up on suspected abuse, fraud, or corruption by using AI to discover components of interest. Mexico is an example of a country where artificial intelligence alone may not be enough to win the war. ... As a result, the cost of connectivity has decreased significantly, and the government is currently preparing for its largest investment ever. By 2024, the objective is to have a 4G mobile connection available to more than 90% of the population. In a society moving toward digital state services, affordable connectivity is critical. The next stage is for the country to establish an AI strategy, which will include initiatives such as striving toward AI-based solutions to offer government services for less money or introducing AI-driven smart procurement. In brief, Mexico aspires to be one of the world’s first 10 countries to adopt a national AI policy.


Introduction to the Node.js reference architecture, Part 5: Building good containers

Why should you avoid using reserved (privileged) ports (1-1023)? Docker or Kubernetes will just map the port to something different anyway, right? The problem is that applications not running as root normally cannot bind to ports 1-1023, and while it might be possible to allow this when the container is started, you generally want to avoid it. In addition, the Node.js runtime has some limitations that mean if you add the privileges needed to run on those ports when starting the container, you can no longer do things like set additional certificates in the environment. Since the ports will be mapped anyway, there is no good reason to use a reserved (privileged) port. Avoiding them can save you trouble in the future. ... A common question is, "Why does container size matter?" The expectation is that with good layering and caching, the total size of a container won't end up being an issue. While that can often be true, environments like Kubernetes make it easy for containers to spin up and down and do so on different machines. Each time this happens on a new machine, you end up having to pull down all of the components. 


Now is the time to prepare for the quantum computing revolution

We've proven that it can happen already, so that is down the line. But it's going to take something in the five- to 10-year range until we have that hardware available. And that's where a lot of the promise of these exponentially faster algorithms lies. So, these are the algorithms that will use these fault-tolerant computers to basically look at all the options available in a combinatorial matrix. So, if you have something like Monte Carlo simulation, you can try substantially all the different variables that are possible and look at every possible combination and find the best optimal solution. So, that's really, practically impossible on today's classical computers. You have to choose what variables you're going to use and reduce things and take shortcuts. But with these fault-tolerant computers, for substantially all of the possible solutions in the solution space, we can look at all of the combinations. So, you can imagine almost an infinite amount or an exponential amount of variables that you can try out to see what your best solution is.


Ragnarok Ransomware Gang Bites the Dust, Releases Decryptor

The gang is the latest ransomware group to shutter operations, due in part to mounting pressure and crackdowns from international authorities that have already led some key players to cease their activity. In addition to Avaddon and SynAck, two heavy hitters in the game — REvil and DarkSide — also closed up shop recently. Other ransomware groups are feeling the pressure in other ways. An apparently vengeful affiliate of the Conti Gang recently leaked the playbook of the ransomware group after alleging that the notorious cybercriminal organization underpaid him for doing its dirty work. However, even as some ransomware groups are hanging it up, new threat groups that may or may not have spawned from the previous ranks of these organizations are sliding in to fill the gaps they left. Haron and BlackMatter are among those that have emerged recently with intent to use ransomware to target large organizations that can pay million-dollar ransoms to fill their pockets. Indeed, some think Ragnarok’s exit from the field isn’t permanent, and that the group will resurface in a new incarnation at some point.



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Laundry

Daily Tech Digest - August 26, 2021

New Passwordless Verification API Uses SIM Security for Zero Trust Remote Access

On the spectrum between passwords and biometrics lies the possession factor – most commonly the mobile phone. That's how SMS OTP and authenticator apps came about, but these come with fraud risk and usability issues, and are no longer the best solution. The simpler, stronger solution to verification has been with us all along – using the strong security of the SIM card that is in every mobile phone. Mobile networks authenticate customers all the time to allow calls and data. The SIM card uses advanced cryptographic security, and is an established form of real-time verification that doesn't need any separate apps or hardware tokens. However, the real magic of SIM-based authentication is that it requires no user action. It's there already. Now, APIs by tru.ID open up SIM-based network authentication for developers to build frictionless, yet secure verification experiences. Any concerns over privacy are alleviated by the fact that tru.ID does not process personally identifiable information between the network and the APIs. It's purely a URL-based lookup.


Cognitive AI meet IoT: A Match Made in Heaven

The progressive trends of mobile edge computing and cloudlets are diffusing edge-based intelligence into connected and more controlled enterprise systems. However, within the diversity of pervasive cyber-physical ecosystems, the autonomy of the discrete edge nodes will require a gain in operational intelligence with minimum supervision. The emerging innovation in cognitive computational intelligence is revealing great potential to introduce contemporary soft computing-based algorithms, architectural rethinking, and progressive system design for the next generation of IoT systems. Cognitive IoT systems break down the strong partition between the silos and interdependencies of software and hardware subsystems. The edge-native AI component is flexible enough to recognize changes in the physical environment and dynamically adjust the analytical outcomes in real time. As a result, the interaction between human and machine, or machine and machine, becomes more dynamic, interoperable, and contextual to the time and scope of any operation.


How the pandemic delivered the future of corporate cybersecurity faster

At some point it becomes untenable and inefficient to manage all these separate solutions. That point gets closer every day as teams have to deal with the complexities and identity management challenges of remote work. Siloed solutions also mean IT staff must monitor several different consoles and may not connect the dots when incidents are flagged on separate platforms. They also require complex and costly integration projects to get the functionality needed. And even then, they’ll likely still require manual oversight. Moving toward all-in-one security solutions can help replicate the sense of cohesion that once existed in on-premises network security, along with new efficiencies. All-in-one solutions can share data across their different components, leading to better and more efficient function. And by adding new modules instead of new products when new tools are needed, you eliminate the expense and complications of integration. Companies and individuals have already gotten used to paying for things like data, cloud storage and web hosting based on how much they use them.


OnePercent ransomware group hits companies via IceID banking Trojan

The OnePercent group's ransom note directs victims to a website hosted on the Tor anonymity network where they can see the ransom amount and contact the attackers via a live chat feature. The note also includes a Bitcoin address where the ransom must be paid. If victims do not pay or contact the attackers within one week, the group attempts to contact them via phone calls and emails sent from ProtonMail addresses. "The actors will persistently demand to speak with a victim company’s designated negotiator or otherwise threaten to publish the stolen data," the FBI said. "When a victim company does not respond, the actors send subsequent threats to publish the victim company’s stolen data via the same ProtonMail email address." The extortion has different levels. If the victim does not agree to pay the ransom quickly, the group threatens to release a portion of the data publicly and if the ransom is not paid even after this, the attackers threaten to sell the data to the REvil/Sodinokibi group to be auctioned off. Aside from the REvil connection, OnePercent might have been tied to other ransomware-as-a-service (RaaS) operations in the past too.


Why Agile Transformations Fail In The Corporate Environment

One key reason an Agile transformation fails is that all the focus is concentrated in just one of the three circles above. It is imperative that we consider these three circles like a Venn diagram and regularly monitor our operating presence. Ideally, we want to operate in all three circles, but it is hard to find balance. Suppose we are working in the mindset and framework circles and trying to build a perfect product with perfect architecture. Spending too much time making things perfect, we are likely to miss the market window, or run into financial difficulties. Similarly, if we operate in the mindset and business agility circles, for example, it could be great for the short term to get a prototype to market quickly, but we will be drowning in technical debt in the long run. Or, imagine that we operate in the framework and business agility circles to build a perfect hotel for our customers — we could miss the fact that they really need a bed-and-breakfast, not a hotel, by not considering the mindset circle. All three perspectives are essential, so to maximize the efficiencies, we need to keep finding the balance.


How the tech sector can provide opportunities and address skills gaps in young people

After all, as far as technology is concerned, none of us are beyond the need for further training and development. The McKinsey Global Institute has recently suggested that as many as 357 million people will need to acquire new skills in the next decade due to the predicted rise of artificial intelligence and automation – skills that few, even in tech-adjacent industries, currently possess. Keeping this kind of projection firmly in mind helps us to remember that the acquisition of new and essential skills is an ongoing process for everyone. As such, employers should not discount those potential candidates who don’t necessarily come from a tech-heavy background. With robust on-the-job training processes and a supportive, inclusive approach towards IT talent, young workers who perhaps missed out on IT fundamentals at school or who chose to focus, for example, on humanities-based university courses can absolutely receive the same attention and prospects as those from a tech-heavy background. A recent government report on aspects of the skills gap has already uncovered an uplifting trend in this direction, with 57% of employers confident that they can find resources to train their employees.


A closer look at two newly announced Intel chips

Intel’s upcoming next-generation Xeon is codenamed Sapphire Rapids and promises a radical new design and gains in performance. One of its key differentiators is its modular SoC design. The chip has multiple tiles that appear to the system as a monolithic CPU, and all of the tiles communicate with each other, so every thread has full access to all resources on all tiles. In a way it’s similar to the chiplet design AMD uses in its Epyc processor. Breaking the monolithic chip up into smaller pieces makes it easier to manufacture. In addition to faster/wider cores and interconnects, Sapphire Rapids has a new feature: a Last Level Cache (LLC) of up to 100MB that can be shared across all cores, with up to four memory controllers and eight memory channels of DDR5 memory, next-gen Optane Persistent Memory, and/or High Bandwidth Memory (HBM). Sapphire Rapids also offers Intel Ultra Path Interconnect 2.0 (UPI), a CPU interconnect used for multi-socket communication. UPI 2.0 features four UPI links per processor with 16GT/s of throughput and supports up to eight sockets.


How to encourage healthy conflict: 8 tips from CIOs

We unpack ideas and differences, seeking to understand each other’s points of view and the experiential lens through which the issue(s) are being evaluated, and then work collaboratively in the spirit of best serving our customers (external and internal) to reach the best decision and path to resolution. In the end, and most importantly, we are a team; so, when we work through the conflict and land on a course of action or decision, we all align, rally, and go into full-on execution mode as one team, with one agenda. Recognize that each team member brings a unique set of experiences, ideas, and beliefs to every conversation and decision. As a leader, you need to be acutely aware of when and how team members engage in conflict and the behaviors that precede and follow such discussions. Encourage team members to participate and share their ideas; candidly and directly elicit their honest and important views on the matters, even when the topics may be challenging and the conflict intense, and especially if the team member may be more quiet or prone to avoid the heat of the debate. 


The Office Of Strategy In the Age Of Agility

Agile methods such as scrum, kanban, and lean development have moved beyond product design and development into other organizational functions, such as customer engagement, employee motivation, and execution amid uncertainty. From the original Agile Manifesto we know the following principles: 1) people over processes and tools, 2) working prototypes over excessive documentation, 3) responding to change over following a plan, and 4) customer collaboration over rigid contracts. In the realm of strategy, however, agile is often confused with adhocism and assumed to produce more chaos than value. But as Jeff Bezos instructs us, when making strategy one must focus on the long term, on the things that will remain largely constant over time. In Amazon’s case, the strategy is three-fold: customer obsession, invention, and patience; and what matters for customers is greater speed, wider selection, and lower cost. With so few strategic priorities, how does the company manage to remain relevant?


Microservice Architecture and Agile Teams

Because services can be worked on in parallel, a team can bring more developers to bear on a problem without them getting in each other’s way. It can also be simpler for those developers to understand their part of the system, since they can focus on just one piece of it. Process isolation also makes it feasible to vary the technology choices a team makes, perhaps mixing different programming languages, programming styles, deployment platforms, or databases to discover the perfect blend. Microservice architecture gives the team more concrete boundaries in a system around which ownership lines can be drawn, allowing much more flexibility in how that ownership is divided up. The microservice architecture enables each service to be developed independently by a team that is concentrated on that service, which in turn makes continuous deployment possible for complex applications. It also enables each service to be scaled individually. It has been observed that when a team or organization adopts microservice architecture, the real gain is the built-in agility the organization acquires.
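To make that boundary concrete, here is a minimal, self-contained sketch (service names, ports, and data are invented for illustration, not taken from the article) of two services that interact only over HTTP. Each runs as its own process, so each can be built, deployed, and scaled independently, and either one could be rewritten in a different language behind the same API.

```python
# Sketch of two independently runnable services: an "inventory" service and an
# "orders" service that calls it over HTTP. All names and data are hypothetical.
import json
import sys
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"sku-1": 12, "sku-2": 0}  # toy data owned by the inventory service


class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "in_stock": STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


class OrdersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")
        # Cross-service call: orders only knows inventory's HTTP API, not its internals.
        with urllib.request.urlopen(f"http://localhost:8001/{sku}") as resp:
            stock = json.load(resp)
        body = json.dumps({"order_accepted": stock["in_stock"] > 0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Run as `python services.py inventory` or `python services.py orders`;
    # each role is a separate process that can be replaced or scaled alone.
    role = sys.argv[1] if len(sys.argv) > 1 else "inventory"
    handler, port = (
        (InventoryHandler, 8001) if role == "inventory" else (OrdersHandler, 8002)
    )
    HTTPServer(("", port), handler).serve_forever()
```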



Quote for the day:

"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney

Daily Tech Digest - August 25, 2021

Forrester exec on robotic process automation’s ‘defining point’

Building a good feedback loop will also be essential. Adding better analytics and process discovery can train the AIs and allow them to deliver better recommendations while also shouldering more of the load. “The integration of process mining, digital work analytics, and machine learning will, in the short term, help generate RPA scripts that mimic the capabilities of humans, and, in the long term, help design more-advanced human-machine or human-in-the-loop (HITL) interaction,” the report stated. Another challenge will be finding the best way to deliver and price the software. Cloud-based services are now common, and customers have the choice between installing the software locally or relying on cloud services managed by the vendor. Many vendors price their software by the number of so-called “bots” assigned to particular tasks. The report anticipates that finer-grained metering will give customers the ability to move to “consumption-based pricing” that better follows their usage. “This may be per minute or hour of robot time or per task executed, but it solves the problem of bot underutilization,” the report predicted.
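As a back-of-the-envelope illustration of why consumption-based pricing addresses bot underutilization, the sketch below compares a flat per-bot licence with per-minute metering. Every price and utilization figure is an assumption chosen for the arithmetic, not a number from the Forrester report.

```python
# Hypothetical numbers only: compare flat per-bot licensing with per-minute metering.
BOTS = 10
FLAT_LICENSE_PER_BOT = 1_000.00   # assumed monthly licence fee per bot
PRICE_PER_ROBOT_MINUTE = 0.05     # assumed consumption rate
UTILIZATION = 0.20                # bots are busy only 20% of the time
MINUTES_PER_MONTH = 30 * 24 * 60

flat_cost = BOTS * FLAT_LICENSE_PER_BOT
busy_minutes = BOTS * MINUTES_PER_MONTH * UTILIZATION
metered_cost = busy_minutes * PRICE_PER_ROBOT_MINUTE

print(f"Flat licensing:  ${flat_cost:,.2f}")    # $10,000.00 regardless of usage
print(f"Metered billing: ${metered_cost:,.2f}") # $4,320.00 at 20% utilization
```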


Kubernetes hardening: Drilling down on the NSA/CISA guidance

Your security efforts shouldn’t stop at the pods. Networking within the cluster is also key to ensuring that malicious activities can’t occur and that, if they do, they can be isolated to mitigate their impact. In addition to securing the control plane, key recommendations include using network policies and firewalls to separate and isolate resources, encrypting traffic in motion, and protecting sensitive data such as secrets at rest. One core way of doing this is taking advantage of Kubernetes’ native namespace functionality. While three namespaces are built in by default, you can create additional namespaces for your applications. Not only does the namespace construct provide isolation, it also lets you apply resource policies that limit storage and compute at the namespace level. This can prevent resource exhaustion, whether accidental or malicious, which can have a cascading effect on the entire cluster and all of its supported applications. While namespaces help provide resource isolation, network policies control the flow of traffic between the various components, including pods, namespaces, and external IP addresses.
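As a concrete sketch of the namespace-plus-quota pattern described above, the snippet below uses the official Kubernetes Python client (my choice for illustration; the NSA/CISA guidance itself works in YAML manifests) to create an isolated namespace and cap its compute requests. The namespace name and limits are placeholders.

```python
# Minimal sketch, assuming the `kubernetes` Python client and a configured kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

namespace = "team-a"  # hypothetical application namespace
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=namespace))
)

# Cap compute so one application cannot exhaust the cluster.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "4",
            "requests.memory": "8Gi",
            "limits.cpu": "8",
            "limits.memory": "16Gi",
        }
    ),
)
core.create_namespaced_resource_quota(namespace=namespace, body=quota)
```

In practice, a NetworkPolicy restricting ingress and egress for that namespace would typically be applied alongside the quota to cover the traffic-flow recommendations as well.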


Conductor: Why We Migrated from Kubernetes to Nomad

The first major issue we ran into with this job type was the GKE autoscaler. As customers’ workloads increased, we started to have incidents where pending jobs piled up exponentially but nothing was scaled up. After examining the Kubernetes source code, we realized that the default Kubernetes autoscaler is not designed for batch jobs, which typically have a low tolerance for delay. We also had no control over when the autoscaler started removing instances: it was set to 10 minutes as a static configuration, and the accumulated idle time increased our infrastructure cost because we could not rapidly scale down once there was nothing left to work on. We also discovered that the Kubernetes job controller, a supervisor for pods carrying out batch processes, was unreliable: the system would lose track of jobs and end up in the wrong state. And there was another scalability issue on the control-plane side: there was no visibility into the size of the GKE clusters’ control plane. As load increased, GKE would automatically scale up the control-plane instances to handle more requests.


Attackers Actively Exploiting Realtek SDK Flaws

“Specifically, we noticed exploit attempts to ‘formWsc’ and ‘formSysCmd’ web pages,” SAM’s report on the incident said. “The exploit attempts to deploy a Mirai variant detected in March by Palo Alto Networks. Mirai is a notorious IoT and router malware circulating in various forms for the last 5 years. It was originally used to shut down large swaths of the internet but has since evolved into many variants for different purposes.” The report goes on to link another, similar attack to the same group: on Aug. 6 Juniper Networks found a vulnerability that, just two days later, was also exploited to try to deliver the same Mirai botnet using the same network subnet, the report explained. “This chain of events shows that hackers are actively looking for command injection vulnerabilities and use them to propagate widely used malware quickly,” SAM said. “These kinds of vulnerabilities are easy to exploit and can be integrated quickly into existing hacking frameworks that attackers employ, well before devices are patched and security vendors can react.”


The difference between digitization, digitalization & digital transformation

Digitization is the process of changing from an analog to a digital form, also known as digital enablement. In other words, digitization takes an analog process and changes it to a digital form without any different-in-kind changes to the process itself. ... Now, perhaps more disputed is the definition of digitalization. According to Gartner, we can define it as the use of digital technologies to change a business model and provide new revenue and value-producing opportunities. This means that businesses can start to put their digitized data to use. Through advanced technologies, businesses can discover the potential of processed digital data and use it to help achieve their business goals. ... Finally, we are introduced to the concept of digital transformation. Here, Gartner states that digital transformation can refer to anything from IT modernization (for example, cloud computing) to digital optimization to the invention of new digital business models. In other words, this is the process of fully benefiting from the enormous digital potential in a business.


Bootstrapping the Authentication Layer and Server With Auth0.js and Hasura

Hasura is a GraphQL engine for PostgreSQL databases. It is not the only GraphQL engine available; there are other solutions like Postgraphile and Prisma. However, after trying a few of them, I've come to appreciate Hasura for several reasons: Hasura is designed for client-facing applications and is one of the simplest solutions to set up; with Hasura, you get a production-level GraphQL server out of the box that’s performant and has a built-in caching system; it has a powerful authorization engine based on RLS (Row Level Security) that allows building granular and complex permission systems; you can host Hasura on-premise using its Docker image, but you can also set up a working GraphQL server in a matter of minutes using Hasura Cloud, an option that is perfect for scaffolding your app and is the one we will use today; and Hasura's dashboard is powerful and user-friendly, letting you write and test your GraphQL queries, manage your database schema, add custom resolvers, and create subscriptions, all from one place.
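For a sense of how the pieces fit together, here is a minimal sketch (the endpoint URL, table name, and token are placeholders of mine, not from the article) of querying Hasura's /v1/graphql endpoint with an Auth0-issued JWT. Hasura evaluates its permission rules against the claims carried in that token and returns only the rows the caller is allowed to see.

```python
# Minimal sketch: query a hypothetical Hasura project with an Auth0 access token.
import requests

HASURA_URL = "https://your-project.hasura.app/v1/graphql"  # placeholder project URL
AUTH0_JWT = "<access token obtained from Auth0>"           # placeholder token

query = """
query MyNotes {
  notes {        # hypothetical table tracked by Hasura
    id
    title
  }
}
"""

resp = requests.post(
    HASURA_URL,
    json={"query": query},
    headers={"Authorization": f"Bearer {AUTH0_JWT}"},
)
resp.raise_for_status()
print(resp.json())  # rows filtered by Hasura's permission rules for this user's role
```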


Why Work-From-Home IT Teams May Be at a Greater Risk for Burnout

Typical burnout indicators include a loss of interest, reduced productivity, and an inability to fully discharge one’s professional duties. “People may also experience high levels of exhaustion, stress, anxiety, and pessimism,” notes Joe Flanagan, senior employment advisor at online employment services provider VelvetJobs. Flanagan stated that burnout can also lead to, or trigger, other mental health issues. “Employers and managers should be trained and sensitized to identify these signs, and teams must have checks and balances to provide support to individuals who are at a higher risk,” he advises. Immediate action is necessary as soon as burnout is suspected in a team or a specific worker, Welch suggests. The solution may be as simple as extending a deadline or offering additional support. He also advises establishing communication channels, such as team video calls, that allow colleagues to interact with each other, exchanging news, insights, and other types of chitchat. “Every team is different, so look for whatever works for the team,” Welch says.


Post-Brexit: how has data protection compliance changed?

While much of the European Union’s General Data Protection Regulation (GDPR) has been incorporated into UK law, it’s still important to consider what has changed in how companies, particularly UK-based ones, ensure compliance with data protection regulations. Index Engines argued in 2017 that GDPR puts personal data back in the hands of citizens, which raises the question: does this still apply? No matter what has changed, one challenge will remain: organisations’ ability to find business- and legal-critical information within their vast unstructured data stores. Then there are the decisions about when to delete data, where to store it, and when to modify and rectify it. This is a complex issue, now involving multiple petabytes of data, and organisations have no real understanding of what their unstructured data contains. With this top of mind, there is arguably a need for Wide Area Network (WAN) acceleration to gain the ability to find and move data around at high speed by mitigating latency and packet loss, which provides quicker data access and retrieval.


What the US Army can teach us about building resilient teams

Science and stories are two of the best ways to defeat skepticism. Gen. Casey approached Dr. Seligman and his team at the University of Pennsylvania because it was one of the few known institutions that had conducted large-scale training on resilience and had published extensive peer-reviewed research in the area. It was also the only known entity that had extensive experience developing and implementing a resilience train-the-trainer model that had also been scientifically reviewed. ... Holistic programs have the power to inspire and transform an entire organization and those who work in it, and stories of transformation make the work come to life and help concepts stick. The last place I thought I would learn anything about vulnerability was with US Army drill sergeants. Yet I can speak personally about my own transformation working with them. I used to be someone who never talked about failure or my own challenges. It was too risky, especially when I was practicing law. But the soldiers helped me understand that talking about your obstacles isn’t a sign of weakness—it’s courageous and inspiring. Here are two examples.


How do you lead hybrid teams? 5 essentials

Transparency is often a leadership virtue in any type of organization, but it’s an absolute must for hybrid teams. It’s the basis for mutual trust and productivity when people aren’t consistently working together in the same location. This starts with a clear, highly visible method of setting goals and expectations – and a shared belief in how you’re tracking progress. “Leaders need to be transparent on a shared set of objectives and how they are measuring employee productivity,” says Thomas Phelps, CIO at Laserfiche. “For me, it’s not about how many hours you work or when you were last online.” ... Making broad assumptions about everyone’s shared understanding and experience is probably a bad idea in a hybrid work mode, for example. Make sure you’re checking in with people, listening to them, and making positive changes when they’re in order. Phelps says Laserfiche has been regularly soliciting employee feedback about current and future operational plans since the company’s pivot to fully remote/WFH last year. Nayan Naidu, head of DevOps and cloud engineering capability center at Altimetrik, likewise emphasizes the importance of transparently setting expectations and reinforcing them regularly. 



Quote for the day:

"It is, after all, the responsibility of the expert to operate the familiar and that of the leader to transcend it." -- Henry A. Kissinger