Daily Tech Digest - September 30, 2022

5 Signs That You’re a Great Developer

Programming changes the way your brain works: you start to think more algorithmically and solve problems faster, which carries over into other aspects of your life. Good programmers can not only learn almost anything else much faster, especially in tech-related fields, but also make great entrepreneurs and CEOs. Look at Elon Musk, for instance: he was a programmer and built his own game when he was 12. ... As we briefly discussed previously, programming encourages creative thinking and teaches you how to approach problems in the most effective way. But to get there, you must first solve a lot of difficult problems and have a passion for doing so; only then are you likely to succeed as a developer. If you’ve just started and think it’s easy, you’re mistaken: you simply haven’t run into genuinely challenging problems yet, and the more you learn, the more difficult and complex they get. You need not only to solve a problem, but to solve it in the most effective way possible, speed up your algorithm, and optimize everything.


Experimental WebTransport over HTTP/3 support in Kestrel

WebTransport is a new draft specification for a transport protocol similar to WebSockets that allows the usage of multiple streams per connection. WebSockets allowed upgrading a whole HTTP TCP/TLS connection to a bidirectional data stream. If you needed to open more streams you’d spend additional time and resources establishing new TCP and TLS sessions. WebSockets over HTTP/2 streamlined this by allowing multiple WebSocket streams to be established over one HTTP/2 TCP/TLS session. The downside here is that because this was still based on TCP, any packets lost from one stream would cause delays for every stream on the connection. With the introduction of HTTP/3 and QUIC, which uses UDP rather than TCP, WebTransport can be used to establish multiple streams on one connection without them blocking each other. For example, consider an online game where the game state is transmitted on one bidirectional stream, the players’ voices for the game’s voice chat feature on another bidirectional stream, and the player’s controls are transmitted on a unidirectional stream. 
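
On the client side, the browser’s WebTransport API maps directly onto that game example. The sketch below is illustrative only: the endpoint URL is hypothetical, and the server would need experimental WebTransport over HTTP/3 support such as Kestrel’s preview.

```typescript
// Open one HTTP/3 session, then multiplex independent streams over it.
// Packet loss on one stream does not stall the others (no TCP head-of-line blocking).
async function connectToGame(): Promise<void> {
  const transport = new WebTransport("https://game.example.com:5001/session");
  await transport.ready; // QUIC + TLS handshake complete

  const gameState = await transport.createBidirectionalStream(); // game state
  const voiceChat = await transport.createBidirectionalStream(); // voice chat
  const controls = await transport.createUnidirectionalStream(); // player input

  // Send a control message on the unidirectional stream.
  const writer = controls.getWriter();
  await writer.write(
    new TextEncoder().encode(JSON.stringify({ action: "move", dir: "left" }))
  );
}
```
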
Software builders across Amazon require consistent, interoperable, and extensible tools to construct and operate applications at our peculiar scale; organizations will extend on our solutions for their specialized business needs. Amazon’s customers benefit when software builders spend time on novel innovation. Undifferentiated work elimination, automation, and integrated opinionated tooling reserve human interaction for high judgment situations. Our tools must be available for use even in the worst of times, which happens to be when software builders may most need to use them: we must be available even when others are not. Software builder experience is the summation of tools, processes, and technology owned throughout the company, relentlessly improved through the use of well-understood metrics, actionable insights, and knowledge sharing. Amazon’s industry-leading technology and access to top experts in many fields provides opportunities for builders to learn and grow at a rate unparalleled in the industry. As builders we are in a unique position to codify Amazon’s values into the technical foundations; we foster a culture of belonging by ensuring our tools, training, and events are inclusive and accessible by design.


The Troublemaker CISO: How Much Profit Equals One Life?

We take for granted that those who are charged with protecting us are doing so with our best interest at heart. There is no shaving off another few cents just to increase shareholder value at the cost of a person’s life. Luckily for me, there is a shift in boardrooms and governing bodies toward examining how socially responsible you are and whether you are acting in the best interest of the people, not just the investors. If the members of the board and governing body are considering these topics when steering a business, isn’t it time to rethink how and why we do things? Are we as CISOs not accountable to leadership to impress on them the risk that IoT/internet connectivity poses to critical networks - and especially to healthcare? It is time to be firm in expressing the risk and saying we would rather spend a bit more money and time and do it the safe way. And this should be listed as the top risk in the company. The other big issue I have with this type of network being connected is one of transparency.


Digital Twins Offer Cybersecurity Benefits

A key difficulty, from a cybersecurity perspective, is the fact that drug production lines are made up of multiple different technologies, running different operating systems that are often provided by different suppliers. “Integrating multiple systems from different suppliers can provide an expanded attack surface that can be exploited by a cyber adversary,” continues Mylrea. To address this, Mylrea and Grimes developed what they refer to as “biosecure digital twins”—replicas of manufacturing lines they use to identify potential points of attack for hackers. “The digital twin is essentially a high-fidelity virtual representation of critical manufacturing processes. From a security perspective, this improves monitoring, detection, and mitigation of stealthy attacks that can go undetected by most conventional cybersecurity defenses,” explains Mylrea. “Beyond security, the biosecure digital twin can optimize performance and productivity by detecting when critical systems deviate from their ideal state and correcting in real time, enabling predictive maintenance that prevents costly faults and safety failures.”


Unlocking cyber skills: This year’s essential back-to-school lesson plan

Technology is continually advancing, which will only create more avenues for cybersecurity roles in the future. While it’s essential to inform students about the types of careers in cybersecurity, teachers and career advisors should be aware of the skills and qualities the sector needs beyond technical computer and software knowledge. Once this is achieved, it can shed light on the roles students can go on to. Technical skills are critical in cybersecurity, yet they can be learned, fostered, and evolved throughout a student’s career. Schools need to tap into individual students’ strengths in hopes of encouraging them to pursue cyber positions. Broadly, cybersecurity enlists leaders, communicators, researchers, critical thinkers… the list goes on. Having the qualities needed to fulfil various roles in the industry can position a student remarkably well when they first start out. Yet, this comes down to their mentors in high school being able to communicate that a student’s inquisitive nature or presentation skills can be applied to various sectors.


Data literacy: Time to cure data phobia

Data literacy is an incredibly important asset and skill set that should be demonstrated at all levels of the workplace. In simple terms, data literacy is the fundamental understanding of what data means, how to interpret it, how to create it and how to use it both effectively and ethically across business use cases. Employees who have been trained in and applied their knowledge of how to use company data demonstrate a high level of data literacy. Although many people have traditionally associated data literacy skills with data professionals and experts, it’s becoming necessary for employees from all departments and job levels to develop certain levels of data literacy. The Harvard Business Review stated: “Companies need more people with the ability to interpret data, to draw insights and to ask the right questions in the first place. These are skills that anyone can develop, and there are now many ways for individuals to upskill themselves and for companies to support them, lift capabilities, and drive change. Indeed, the data itself is clear on this: Data-driven decision-making markedly improves business performance.”


To BYOT & Back Again: How IT Models are Evolving

The growing complexity of IT frameworks is startling. A typical enterprise has upwards of 1,200 cloud services and hundreds of applications running at any given moment. On top of that, employees have their own smartphones, and many use their own routers and laptops. Meanwhile, various departments and groups -- marketing, finance, HR and others -- subscribe to specialized cloud services. The difficulties continue to pile up -- particularly as CIOs look to build out more advanced data and AI frameworks. McKinsey & Company found that between 10% and 20% of IT budgets are devoted to adding more technology in an attempt to modernize the enterprise and pay down technical debt. Yet, part of the problem, it noted, is “undue complexity” and a lack of standards, particularly at large companies that stretch across regions and countries. In many cases, orphaned and balkanized systems, data sprawl, data silos, and complex device management requirements follow. For CIOs seeking simplification and tighter security, the knee-jerk reaction is often to clamp down on choices and options.


IT leadership: What to prioritize for the remainder of 2022

To deliver product-centric value, it’s best to have autonomous, cross-functional teams running an Agile framework. Those teams can include technical practitioners, design thinkers, and business executives. Together, they can increase business growth by as much as 63%, Infosys’ Radar report uncovered. Cross-pollination efforts can spread Agile across the entire enterprise, building credibility and trust among high-level stakeholders toward an iterative process that can deliver meaningful, if incremental, business results. Big-bang rollouts, with a raft of modernizations released in one fell swoop, may seem attractive to management or other stakeholders. But they carry untold risk: developers scrambling to fix bugs after the fact, account teams working to retain disgruntled customers. Approach cautiously, and consider an Agile roadmap of smaller, iterative developments instead of one momentous release. Such a roadmap also breaks down the considerable task of application modernization into bite-sized chunks.


How Policy-as-Code Helps Prevent Cloud Misconfigurations

Policy-as-code is a great cloud configuration solution because it reduces the potential for human error and makes it more difficult for hackers to interfere. Policy compliance is crucial for cloud security, ensuring that every app and piece of code follows the necessary rules and conditions. The easiest way to ensure nothing slips through the cracks is to automate the compliance management process. Policy-as-code is also a good choice in a federated risk management model, where a set of common standards is applied across a whole organization while departments or units retain their own methods and workflows. PaC fits seamlessly into this high-security system by scaling and automating IT policies throughout a company. Preventing cloud misconfiguration relies on effectively ensuring every app and line of code adheres to an organization’s IT policies. PaC offers some key benefits that make this possible without being a hassle. Policy-as-code improves the visibility of IT policies since everything is clearly defined in code format.
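
As a concrete, tool-agnostic illustration, a policy can be written as an executable check over a declarative description of a resource. The sketch below is hypothetical and not tied to any particular PaC product (real deployments often use engines such as Open Policy Agent); it shows the core idea of policies living in code, versioned and run automatically:

```typescript
// Minimal policy-as-code sketch: policies are plain functions over a
// declarative description of a cloud resource, so they can be versioned,
// reviewed in pull requests, and run automatically in CI on every change.
interface StorageBucket {
  name: string;
  publicAccess: boolean;
  encrypted: boolean;
}

type Policy = (bucket: StorageBucket) => string | null; // null = compliant

const policies: Policy[] = [
  (b) => (b.publicAccess ? `${b.name}: public access is forbidden` : null),
  (b) => (b.encrypted ? null : `${b.name}: encryption at rest is required`),
];

function evaluate(buckets: StorageBucket[]): string[] {
  return buckets.flatMap((b) =>
    policies.map((p) => p(b)).filter((v): v is string => v !== null)
  );
}

// Example: fail the pipeline if any bucket violates a policy.
const violations = evaluate([
  { name: "billing-exports", publicAccess: true, encrypted: true },
]);
if (violations.length > 0) {
  console.error(violations.join("\n"));
  process.exit(1);
}
```

Because the rules are ordinary code, a misconfiguration can be caught before it ever reaches the cloud rather than discovered after deployment.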



Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward

Daily Tech Digest - September 29, 2022

Hybrid work is the future, and innovative technology will define it

We’re starting to see an amplification of recognition tools, of coaching platforms, of new and exciting ways to learn that are leveraging mobility and looking at how people want to work and to meet them where they are, rather than saying, “Here’s the technology, learn how to use it.” It’s more about, “Hey, we’re learning how you want to work and we’re learning how you want to grow, and we’ll meet you there.” We’re really seeing an uptake in the HR tech space of tools that acknowledge the humaneness underneath the technology itself. ... The second layer consists of the business applications we’ve come to know and love. Those include HR apps, business applications, supply chain applications, and financial applications, et cetera. Certainly, there is a major role in this distributed work environment for virtual application delivery and better security. We need to access those mission-critical apps remotely and have them perform the same way whether they’re virtual, local, or software as a service (SaaS) apps -- all through a trusted access security layer.


Web 3: How to prepare for a technological revolution

It hardly needs to be said, but over the last few decades, the internet has grown to be arguably the most integral cog ensuring a smooth-running, functioning society. It is so ingrained that almost every industry in the world would be unable to function properly without it. And this reliance will only grow as Web 3 becomes the norm, which makes it critical that we begin to educate children now on its uses and how to navigate it. Already, many of today’s adults will find it difficult to explain what Web 3 is, let alone teach the next generation how to use it. Educating children early will not only help them thrive in the future, but they will also be able to pass gained knowledge up the chain to their parents. This is, of course, just history repeating itself. It is the equivalent of kids showing their parents how to use a touch screen or work their email. But the revolution Web 3 is about to bring is on a different scale to any previous technological advancement. Soon, the greatest opportunities will be solely available on the new internet, and it is critical we ensure every child has the opportunity to succeed.


Health data governance and the case for regulation

Without appropriate data governance procedures and training in place, healthcare organizations are likely to find themselves in danger of noncompliance. HIPAA violations in particular can occur at any level of an organization; if an undertrained staff member or an unsecured database is operating in your organization, there’s a strong likelihood that patient data will eventually be misused and HIPAA regulations breached. This kind of breach can lead to noncompliance, fines, legal issues, poorer patient experiences and even a loss of trust within the greater medical community. Data governance means the difference between a successful and fully operational facility and a facility that gets shut down by the government. On the other hand, when data governance principles are applied successfully in the healthcare sector, a slew of benefits outside of basic compliance can be realized. Patients feel confident that their information is safe and begin to refer their friends and family members to your network. Data becomes easier to find, label and organize for new operational use cases and emerging patient technologies.


Closing the Gap Between Complexity and Performance in Today’s Hybrid IT Environments

Nowadays, the increasing need for security on all fronts has fueled collaboration between teams on a regular basis. This, in turn, has spurred more proactivity from an internal IT operations perspective. Proactivity, bolstered by a unified view into traffic and communication, is a key aspect of closing the gap between cloud complexity and performance — because it starts at the IT cultural level. Technical capabilities like deep observability can support team prioritization of detection and management on a more holistic level, addressing all aspects of IT infrastructure. With this, organizations can feel more confident in overcoming cloud-based challenges and mitigating connected cyber vulnerabilities as a collective force. An all-encompassing, proactive approach is needed to speedily detect cyber threats, respond to the corresponding activity, and enact a remediation plan. Within hybrid and multi-cloud environments, data and communication costs can skyrocket. Many of the most common use cases stem from packet data, whose sheer volume can interfere with control of and visibility into the right data.


Blockchain and artificial intelligence: on the wave of hype

Blockchain is an innovative digital information storage system that stores data in an encrypted, distributed ledger format. The data is encrypted and distributed across multiple computers, which makes it tamper-proof. It is a secure database that can only be read and updated by those with permission. There are a few examples on the web today of blockchain and artificial intelligence being interconnected, mostly studies conducted by academics and scientists, but we see the two concepts working well together. ... Today’s computers are extremely fast, but they also require a constant supply of data and instructions, without which it is impossible to process information or perform tasks. Blockchain run on standard computers therefore requires significant computing power because of the encryption processes involved. Secure data monetization could be one result of combining blockchain and artificial intelligence. Monetizing collected data is a source of revenue for many companies; Facebook and Google are among the biggest and best-known examples.


How MLops deployment can be easier with open-source versioning

Among the many reasons why there are a growing number of vendors in the sector, a significant one is that building and deploying ML models is often a complicated process with many manual steps. A primary goal of MLops tools is to help automate the process of building and deploying models. While automation is important, it only solves part of the complexity. A key challenge for artificial intelligence (AI) models, identified in a recently released Gartner report, is that only about half of AI models actually end up making it into production. From Guttmann’s perspective, with application development, developers tend to have a linear way of building things. This implies, for example, that new code written six months after the initial development is better than the original. That same view does not tend to work with machine learning, as the process involves more research and more experimentation to determine what actually works best. “Development is always money sunk into the problem until you actually see the fruits of the effort and we want to decrease that development time to a minimum,” he said.
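
To make that experimentation manageable, MLops tooling typically versions each training run together with its data and metrics, so that “best so far” is an explicit, queryable fact rather than simply the latest commit. A minimal, tool-agnostic sketch of the idea (the record shape is invented for illustration):

```typescript
// Hypothetical experiment registry: unlike linear app development, ML work
// produces many parallel runs, and the "latest" run is not necessarily best.
interface Run {
  id: string;
  modelVersion: string;   // e.g., a git/DVC-style content hash
  dataVersion: string;    // the dataset snapshot the model was trained on
  params: Record<string, number>;
  validationScore: number;
}

const runs: Run[] = [];

function logRun(run: Run): void {
  runs.push(run);
}

// Promotion to production picks the best-scoring run, not the newest one.
function bestRun(): Run | undefined {
  return runs.reduce<Run | undefined>(
    (best, r) =>
      best === undefined || r.validationScore > best.validationScore ? r : best,
    undefined
  );
}
```
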


Robotic Process Automation Will Shape the Future of Hotel Operations

On the guest-facing side of the business, automation can be applied to virtually every touchpoint of the guest journey. Marketing automation comes in the form of upsell opportunities, re-marketing or recovery campaigns in the pre-stay, pre-check-in and post-cancellation stages. In the back of the house, automation is helping the marketing, revenue and sales departments get more done with fewer resources. Integrated CRM systems have become the heart of new, guest-centric personalization strategies such as automated email marketing programs that are proving to be huge time-savers. Revenue managers are tapping automation to stay on top of pricing and demand trends. RPA reduces the common challenges presented by running a business on a fragmented tech stack. Siloed systems often lead to a great deal of manual effort, such as copying, importing, exporting data from one system to another, or the common “swivel-chair integration.” Through RPA, operators can create workflows that fill feature gaps or replicate features from other systems, saving them time and money.


Russian hackers' lack of success against Ukraine shows that strong cyber defences work

Since the invasion, Cameron said, "what we have seen is a very significant conflict in cyberspace – probably the most sustained and intensive cyber campaign on record." But she also pointed to the lack of success of these campaigns, thanks to the efforts of Ukrainian cyber defenders and their allies. "This activity has provided us with the clearest demonstration that a strong and effective cyber defence can be mounted, even against an adversary as well prepared and resourced as the Russian Federation." Cameron argued that not only does this provide lessons for what countries and their governments can do to protect against cyberattacks, but there are also lessons for organisations on how to protect against incidents, be they nation-state backed campaigns, ransomware attacks or other malicious cyber operations. "Central to this is a commitment to long-term resilience," said Cameron. "Building resilience means we don't necessarily need to know where or how the threat will manifest itself next. Instead, we know that most threats will be unable to breach our defences. And when they do, we can recover quickly and fully."


The Unlikely Journey of GraphQL

GraphQL is drawing the spotlight because refactoring or modernization of applications into microservices is stressing REST to its limits. As information consumers, we expect more from the digital platforms that we use. Shop for a product, and we will also want to find reviews, competing offers, autofill keyword search, and likely other options. Monolithic apps crack under the load, and for similar reasons the same fate could be befalling REST, which requires pinpoint commands to specific endpoints; with complex queries, that means lots of pinpoint requests. Facebook developers created GraphQL as a client specification for alleviating the bottlenecks that were increasingly cropping up when fetching data from polyglot sources to a variety of web and mobile clients. With REST, developers had to know all the endpoints. By contrast, GraphQL is declarative: you specify what data you need rather than how to produce it.
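
The shopping example above maps to a single declarative query. The endpoint and schema below are hypothetical, but the shape is idiomatic GraphQL: the client names exactly the fields it needs, and one request replaces several pinpoint REST calls:

```typescript
// One round trip fetches the product plus its reviews and competing offers;
// with REST this would typically be three endpoint-specific requests.
const query = `
  query ProductPage($id: ID!) {
    product(id: $id) {
      name
      price
      reviews(first: 5) { rating text }
      competingOffers { seller price }
    }
  }
`;

async function fetchProductPage(id: string): Promise<unknown> {
  const res = await fetch("https://shop.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id } }),
  });
  const { data } = await res.json();
  return data.product;
}
```
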


Cryptojacking, DDoS attacks increase in container-based cloud systems

The Sysdig report also noted that there has been a jump in DDoS attacks that use containers since the start of the Russian invasion of Ukraine. "The goals of disrupting IT infrastructure and utilities have led to a four‑fold increase in DDoS attacks between 4Q21 and 1Q22," according to the report. "Over 150,000 volunteers have joined anti‑Russian DDoS campaigns using container images from Docker Hub. The threat actors hit anyone they perceive as sympathizing with their opponent, and any unsecured infrastructure is targeted for leverage in scaling the attacks." Meanwhile, a pro-Russian hacktivist group called Killnet launched several DDoS attacks on NATO countries, including, but not limited to, websites in Italy, Poland, Estonia, Ukraine, and the United States. “Because many sites are now hosted in the cloud, DDoS protections are more common, but they are not yet ubiquitous and can sometimes be bypassed by skilled adversaries,” Sysdig noted. “Containers pre‑loaded with DDoS software make it easy for hacktivist leaders to quickly enable their volunteers.”



Quote for the day:

"Good leaders value change, they accomplish a desired change that gets the organization and society better." -- Anyaele Sam Chiyson

Daily Tech Digest - September 28, 2022

How to Become an IT Thought Leader

Being overly tech-centric is a common mistake aspiring thought leaders make. Such individuals start with a technology, then look for problems to solve. “Instead, it's important to remember that an IT thought leader drives digital change,” Zhao says. “Understanding the technology is only one aspect of IT thought leadership.” Ross concurs. “I’ve seen several troubling examples of large technology purchases occurring before key business requirements were fully understood,” he says. “Seek first to understand the desired business outcomes and remember that technology is a potential enabler of those outcomes, but never a cure-all.” A strong business case is essential for any proposed new technology, Bethavandu says. “If your company is not ready for, say, DevOps or containerization, be self-aware and don’t push for those projects until your organization is ready,” he states. On the other hand, excessive caution can also be dangerous. “If you want to be a thought leader, you have to be bold and you cannot be afraid of failing,” Bethavandu says.


Most Attackers Need Less Than 10 Hours to Find Weaknesses

Overall, nearly three-quarters of ethical hackers think most organizations lack the necessary detection and response capabilities to stop attacks, according to the Bishop Fox-SANS survey. The data should convince organizations to not just focus on preventing attacks, but aim to quickly detect and respond to attacks as a way to limit damage, Bishop Fox's Eston says. "Everyone eventually is going to be hacked, so it comes down to incident response and how you respond to an attack, as opposed to protecting against every attack vector," he says. "It is almost impossible to stop one person from clicking on a link." In addition, companies are struggling to secure many parts of their attack surface, the report stated. Third parties, remote work, the adoption of cloud infrastructure, and the increased pace of application development all contributed significantly to expanding organizations' attack surfaces, penetration testers said. Yet the human element continues to be the most critical vulnerability, by far. 


Discover how technology helps manage the growth in digital evidence

With limited resources, even the most skilled law-enforcement personnel are hard-pressed to comb through terabytes of data that may include hours of videos, tens of thousands of images, and hundreds of thousands of words in the form of text, email, and other sources. One possible solution is to augment skilled investigators and forensic examiners with technology. Some of the key technological capabilities that can be applied to this problem are AI and machine learning. AI and machine learning models and applications create processes that read, watch, extract, index, sort, filter, translate, and transcribe information from text, images, and video. By utilizing technology to carve through and analyze data, it’s possible to reduce the data mountain to a series of small hills of related content and add tags that make it searchable. That allows people to spend their time and energy on work that is most valuable in the investigation. The good news is that help is available. Microsoft has multiple AI and machine learning processes within our Microsoft Azure Cognitive Services. 


The modern enterprise imaging and data value chain

The costs and consequences of the current fragmented state of health care data are far-reaching: operational inefficiencies and unnecessary duplication, treatment errors, and missed opportunities for basic research. Recent medical literature is filled with examples of missed opportunities—and patients put at risk because of a lack of data sharing. More than four million Medicare patients are discharged to skilled nursing facilities (SNFs) every year. Most of them are elderly patients with complex conditions, and the transition can be hazardous. ... “Weak transitional care practices between hospitals and SNFs compromise quality and safety outcomes for this population,” researchers noted. Even within hospitals, sharing data remains a major problem. ... Data silos and incompatible data sets remain another roadblock. In a 2019 article in the journal JCO Clinical Cancer Informatics, researchers analyzed data from the Cancer Imaging Archive (TCIA), looking specifically at nine lung and brain research data sets containing 659 data fields in order to understand what would be required to harmonize data for cross-study access.


Cloud’s key role in the emerging hybrid workforce

One key to the mistakes may be the overuse of cloud computing. Public clouds provide more scalable and accessible systems on demand, but they are not always cost-effective. I fear that much like when any technology becomes what the cool kids are using, cloud is being picked for emotional reasons and not business reasons. On-premises hardware costs have fallen a great deal during the past 10 years. Using these more traditional methods of storage and compute may be way more cost-effective than the cloud in some instances and may be just as accessible, depending on the location of the workforce. My hope is that moving to the cloud, which was accelerated by the pandemic, does not make us lose sight of making business cases for the use of any technology. Another core mistake that may bring down companies is not having security plans and technology to support the new hybrid workforce. Although few numbers have emerged, I suspect that this is going to be an issue for about 50% of companies supporting a remote workforce.


Why zero trust should be the foundation of your cybersecurity ecosystem

Recently, zero trust has developed a large following due to a surge in insider attacks and an increase in remote work – both of which challenge the effectiveness of traditional perimeter-based security approaches. A 2021 global enterprise survey found that 72% of respondents had adopted zero trust or planned to in the near future. Gartner predicts that spending on zero trust solutions will more than double to $1.674 billion between now and 2025. Governments are also mandating zero trust architectures for federal organizations. These endorsements from the largest organizations have accelerated zero trust adoption across every sector. Moreover, these developments suggest that zero trust will soon become the default security approach for every organization. Zero trust enables organizations to protect their assets by reducing the chance and impact of a breach. It also reduces the average breach cost by at least $1.76 million, can prevent five cyber disasters per year, and save an average of $20.1 million in application downtime costs.


Walls between technology pros and customers are coming down at mainstream companies

Tools assisting with this engagement include "prediction, automation, smart job sites and digital twins," he says. "We have resources in each of our geographic regions where we scale new technology from project to project to ensure the 'why' is understood, provide necessary training and support, and educate teams on how that technology solution makes sense in current processes and day-to-day operations." At the same time, getting technology professionals up to speed with crucial pieces of this customer collaboration -- user experience (UX) and design thinking -- is a challenge, McFarland adds. "There is a widely recognized expectation to create seamless and positive customer experiences. That said, specific training and technological capabilities are a headwind that professionals are experiencing. While legacy employees may be fully immersed and knowledgeable about a certain program and its technical capabilities, it is more unusual to have both the technical and UX design expertise. The construction industry is working to find the right balance of technology expertise and awareness with UX and design proficiencies."


Why Is the Future of Cloud Computing Distributed Cloud?

Distributed cloud redefines cloud computing afresh: a distributed cloud is a public cloud architecture that handles data processing and storage in a distributed manner. Put simply, a business using distributed cloud computing can store and process its data in various data centers, some of which may be physically situated in other regions. A content delivery network (CDN), a geographically dispersed network architecture, is an example of a distributed cloud. It is made to deliver content (most frequently video or music) quickly and efficiently to viewers in various places, significantly reducing download times. Distributed clouds, however, offer advantages to more than just content producers and artists. They can be utilized in multiple business contexts, including transportation and sales. It is even possible to scope a distributed cloud to particular geographical regions. For instance, a supplier of file transfer services can format video and store content on CDNs spread out globally while using centralized cloud resources.


How to Become a Data Analyst – Complete Roadmap

First, understand this: the field of data analysis is not about computer science per se but about applying computation, analysis, and statistics. This field focuses on working with large datasets and producing useful insights that help solve real-life problems. The whole process starts with a hypothesis that needs to be answered, followed by gathering new data to test that hypothesis. There are two major categories of data analyst: tech and non-tech. The two work with different tools, and tech-domain professionals are also required to know the relevant programming languages (such as R or Python). A working professional should be fluent in statistics so that they can turn any given amount of raw data into a well-organized structure. ... Today, countless companies are generating data on a daily basis and using it to make crucial business decisions. It helps in deciding their future goals and setting new milestones. We’re living in a world where data is the new fuel, and to make it useful, data analysts are required in every sector.


Software developer: A day in the life

An analytics role will require you to learn new skills continuously, look at things in new ways, and embrace new perspectives. In technology and business, things happen quickly. It is important to always keep up with what is happening in the industries in which you are involved. Never forget that at its core, technology is about problem-solving. Don’t get too attached to any coding language; just be aware that you probably won’t be able to use the language you like, do the refactor you want, or perform the update you expect all the time. The end focus is always on the client, and their needs take priority over developer preferences. Be prepared to use English every day. To keep your skills sharp, read documentation, talk to others often, and watch videos. ... Any analytics professional who is interested in elevating their career should always be attentive to new technologies and updates, become an expert in some specific language/technology, and understand the low level of programming in a variety of languages. Finally, if you enjoy logic, math, and problem-solving, consider a career in software development. The world needs your skills to solve big challenges.



Quote for the day:

"Leadership Principle: As hunger increases, excuses decrease." -- Orrin Woodward

Daily Tech Digest - September 27, 2022

In the shift to hybrid work, don’t overlook your in-person workforce

As companies think through their workforce strategies, taking a few critical steps can help. First, make sure that the in-person cohort receives the same amount of consideration as remote and hybrid workers. New ways of working clearly pose challenges in terms of productivity, but there is a real risk in senior leaders focusing most of their time and attention on remote-work issues. Second, measure employee sentiment, over time, to understand which factors are successful in boosting engagement and morale among the in-person workforce, and where the organization can improve. Third, look for ways to increase the autonomy of in-person workers. Encourage them to make suggestions about how their work can be done better, and empower them to act on those suggestions. Create some degree of flexibility in terms of scheduling. For example, enable workers to have more say in setting schedules, and allow workers to trade shifts. Fourth, invest in upskilling initiatives; they are a key driver of empowerment and engagement. 


Caught in the crossfire of cyber conflict

Cyber events are now routinely crossing thresholds that would have been viewed as unacceptably risky 20 years ago. The result is that offensive cyber operations are manageable for countries such as the US but catastrophic for smaller countries thrust into the cyber conflict space. The potential scale of this effect likely makes smaller countries ideal targets for sophisticated actors looking to demonstrate their capabilities. Iran appears to have stronger evidence of Israel’s role in the ‘Predatory Sparrow’ campaign (the two countries have been exchanging attacks for years) but opted to attack Albania’s government for harbouring the MeK—using the disruptive incident to send a message to Iran’s enemies. This incident is chilling because it shows the spread of sophisticated cyber capabilities, and the growing intent to conduct such operations. Most theories around cyber conflict have kept the US as a key player in such conflicts—‘Predatory Sparrow’ and Iran’s response have shown that this is outdated.


How DevOps Practices will Expedite AI Adoption?

Although AI has developed and revolutionized many corporate processes, there are still obstacles to overcome because it necessitates a lot of human labor. Getting a dataset, cleaning it, training on it, and making predictions appears increasingly tricky. A different problem is creating a fluid, generalized training pattern, or transferring a specific approach from one situation to another. Businesses could adapt their operational procedures to achieve more noticeable outcomes, for example by adopting a DevOps culture, which results in a practical development, deployment, and operation pipeline. ... DevOps and IT teams must work closely to achieve this; as a result, a central repository for model artifacts is required, and ML engineers must redesign the production model. Thus, smooth collaboration between the IT, DevOps, and data science teams is crucial. MLOps, or machine learning operations, is a different way of describing the confluence of people, processes, practices, and underlying technology that automates the implementation, monitoring, and management of AI/ML models in production in a scalable and thoroughly controlled manner.


India: Crucial cyberwarfare capabilities need to be upgraded

The world has seen many cases of cyber-attacks used for espionage and sabotage. Many significant cyberattacks in the military and civil spaces have occurred in recent months. APT41, a Chinese state-sponsored hacking group, allegedly hacked into six US state governments between May 2021 and February 2022. Another was the Distributed Denial of Service (DDoS) attack on Israeli government websites the preceding month. While the government has said this was the largest cyber-attack Israel has faced, investigations are yet to determine the source of the attack. Similarly, a targeted cyber-attack campaign on Russian research institutes was discovered in June 2021. The target was research institutes under the Rostec Corporation, whose primary expertise is the research and development of highly technological defence solutions. In India, researchers detected a new ransomware that made its victims donate money to the needy. However, this ransomware, called Goodwill, also acts maliciously by causing temporary or even permanent loss of company data and the possible closure of a company’s operations and finances.


The API gateway pattern versus the Direct client-to-microservice communication

In a microservices architecture, the client apps usually need to consume functionality from more than one microservice. If that consumption is performed directly, the client needs to handle multiple calls to microservice endpoints. What happens when the application evolves and new microservices are introduced or existing microservices are updated? If your application has many microservices, handling so many endpoints from the client apps can be a nightmare. Since the client app would be coupled to those internal endpoints, evolving the microservices in the future can cause high impact for the client apps. ... When you design and build large or complex microservice-based applications with multiple client apps, a good approach to consider can be an API Gateway. This pattern is a service that provides a single-entry point for certain groups of microservices. It's similar to the Facade pattern from object-oriented design, but in this case, it's part of a distributed system. The API Gateway pattern is also sometimes known as the "backend for frontend" (BFF) because you build it while thinking about the needs of the client app.
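
A minimal sketch of the pattern in Node.js (service names, ports, and routes are hypothetical): the gateway is the single entry point, and client apps never couple to the internal microservice endpoints behind it.

```typescript
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

// Single entry point for client apps; the internal service addresses stay
// hidden behind the gateway and can evolve without breaking clients.
const app = express();

app.use("/api/catalog", createProxyMiddleware({ target: "http://catalog-svc:8080", changeOrigin: true }));
app.use("/api/orders", createProxyMiddleware({ target: "http://ordering-svc:8080", changeOrigin: true }));
app.use("/api/basket", createProxyMiddleware({ target: "http://basket-svc:8080", changeOrigin: true }));

// Cross-cutting concerns (auth, rate limiting, logging) can also live here,
// so each microservice does not have to reimplement them.
app.listen(8000, () => console.log("API gateway listening on :8000"));
```
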


Why Choose a NoSQL Database? There Are Many Great Reasons

Speed is critical to innovation, but so is flexibility. A core principle of agile development is responding quickly to change. Often when the requirements change, the data model also needs to change. With relational databases, developers often have to formally request a “schema change” from the database administrators. This slows down or stops development. By comparison, a NoSQL document database fully supports agile development because it is schema-less and does not statically define how the data must be modeled. Instead, it defers to the applications and services, and thus to the developers as to how data should be modeled. With NoSQL, the data model is defined by the application model. Applications and services model data as objects (such as employee profile), multivalued data as arrays (roles) and related data as nested objects or arrays (for instance, manager relationship). Relational databases, however, model data as tables of rows and columns — related data as rows within different tables, multivalued data as rows within the same table.
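
Continuing the employee-profile example from the passage, the same record that a relational schema would spread across several tables becomes one self-describing document. A sketch with illustrative field names:

```typescript
// One document holds what a relational model splits across employee, role,
// and manager tables: multivalued data as arrays, related data nested.
interface EmployeeProfile {
  id: string;
  name: string;
  roles: string[];                        // multivalued data as an array
  manager?: { id: string; name: string }; // related data as a nested object
}

const profile: EmployeeProfile = {
  id: "emp-1042",
  name: "Dana Liu",
  roles: ["developer", "tech-lead"],
  manager: { id: "emp-0007", name: "Sam Ortiz" },
};

// Schema-less storage: when requirements change, new fields can appear on
// new documents without a formal schema-change request.
const asDocument = JSON.stringify(profile);
```
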


Securing the Internet of Things

Unlike humans, who need to be able to access a potentially unbounded number of destinations (websites), the endpoints that an IoT device needs to speak to are typically far more bounded. But in practice, there are often few controls in place (or available) to ensure that a device only speaks to your API backend, your storage bucket, and/or your telemetry endpoint. Our Zero Trust platform, however, has a solution for this: Cloudflare Gateway. You can create DNS, network or HTTP policies, and allow or deny traffic based not only on the source or destination, but on richer identity- and location- based controls. It seemed obvious that we could bring these same capabilities to IoT devices, and allow developers to better restrict and control what endpoints their devices talk to (so they don’t become part of a botnet). ... Security continues to be a concern: if your device needs to talk to external APIs, you have to ensure you have explicitly scoped the credentials they use to avoid them being pulled from the device and used in a way you don’t expect.
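
Platform specifics aside (the sketch below is generic and deliberately not Cloudflare Gateway’s actual configuration format), the underlying idea is that a device’s legitimate destinations form a small, explicit allowlist:

```typescript
// Generic egress allowlist: an IoT device should only ever talk to its API
// backend, storage bucket, and telemetry endpoint (hostnames hypothetical).
const allowedDestinations = new Set([
  "api.device-backend.example.com",
  "storage.example.com",
  "telemetry.example.com",
]);

function isEgressAllowed(destinationHost: string): boolean {
  return allowedDestinations.has(destinationHost);
}

// A gateway enforcing this drops the command-and-control callbacks that a
// botnet would rely on.
console.log(isEgressAllowed("api.device-backend.example.com")); // true
console.log(isEgressAllowed("botnet-c2.example.net"));          // false
```
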


Modern Enterprise Data Architecture

In traditional architecture development, data modeling is the simple task of deriving data elements from requirements, depicting the relation between the entities through entity relationship (ER) diagrams, and defining the parameters (data types, constraints, validations) around the data elements. This means that data modeling is done as a single-step activity in a traditional architecture by defining the data definition language (DDL) scripts from requirements. ... A database acts as the brain for an IT application because it serves as the central store for data being transacted and referenced in the application. Database administrators (DBAs) handle database tuning, security activities, backup, DR activities, server/platform updates, health checks, and all other management and monitoring activities of databases. When you use a cloud platform for application and database development, the aforementioned activities are critical for better security, performance, and cost efficiency. 


Data privacy can give businesses a competitive advantage

It is a similar story of a competitive edge waiting to be revealed through compliance when it comes to protecting personal data. The fines that non-compliance brings are perhaps one of the most-reported aspects of the new regulation. Serious breaches can cost a company €20m, or 4 per cent of global annual revenue per offence, but the Information Commissioner’s Office (ICO) has been very clear it has no intention to scapegoat businesses using these powers. The GDPR is very clear that data has to be held and processed securely, and though the law does not prescribe exactly how, Article 32 outlines what is expected. The ICO’s advice is that processing the minimum amount of personally identifiable information possible is a good start. Then, storing it securely and in an encrypted form makes sense. In certain circumstances, anonymising data so it can collectively provide insight without revealing identities is another tactic many organisations are using. Securing data so it cannot be hacked is a worthy end in its own right.


7 Metrics to Measure the Effectiveness of Your Security Operations

The main objective of a resilient security operations program should be lowering an organization's MTTD and MTTR to limit any damage done by a cyber incident to your organization. MTTD measures the amount of time it takes to discover a potential security threat. This metric helps you understand the effectiveness of your organization's security operations and your team's speed and ability to recognize a threat. Therefore, the goal is to keep this metric as low as possible in order to reduce the impact of a compromise on your organization. Meanwhile, MTTR helps you measure the time it takes to respond to a threat once it is detected. A higher response time indicates that a compromise could lead to a damaging data breach. The goal is to speed up your response and decrease your risk, just like MTTD. Both MTTD and MTTR are key metrics to measure and improve your team's capabilities since it is crucial to track the effectiveness of your team as your organization's maturity grows. Like any fundamental business operation, to mature your organization you should measure operational effectiveness to determine whether your organization is reaching its KPIs and SLAs.
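
Both metrics fall out directly from incident timestamps. A minimal sketch, assuming a hypothetical incident record with occurrence, detection, and resolution times:

```typescript
// MTTD = mean(detectedAt - occurredAt); MTTR = mean(resolvedAt - detectedAt).
interface Incident {
  occurredAt: Date;  // when the compromise actually began
  detectedAt: Date;  // when the SOC first identified it
  resolvedAt: Date;  // when response/remediation completed
}

const hours = (ms: number): number => ms / 3_600_000;

function meanTimeToDetect(incidents: Incident[]): number {
  const total = incidents.reduce(
    (sum, i) => sum + hours(i.detectedAt.getTime() - i.occurredAt.getTime()), 0);
  return total / incidents.length;
}

function meanTimeToRespond(incidents: Incident[]): number {
  const total = incidents.reduce(
    (sum, i) => sum + hours(i.resolvedAt.getTime() - i.detectedAt.getTime()), 0);
  return total / incidents.length;
}
```

Tracked over each reporting period, a falling MTTD and MTTR gives a concrete trend line for how the team's detection and response capabilities are maturing.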



Quote for the day:

"Leadership is the art of giving people a platform for spreading ideas that work" -- Seth Godin

Daily Tech Digest - September 26, 2022

What is the role of the data manager?

The data manager role is not just about being “good with data”. It involves a combination of technical and interpersonal skills, says Andy Bell, vice president global data product management at data integrity specialist Precisely. As well as technical skills, he says data managers need to have “a thorough understanding about the application of technology”. In addition, they need to understand “how data is moved, managed and processed across organisations, what capabilities it does and doesn’t provide, and how data science teams can use information in the best way possible”. At the same time, data managers must be good critical thinkers, according to Bell. “They need to keep up to date with wider technology industry trends, as well as how legislation and data privacy regulations impact tools – which may need to be adapted to ensure they are compliant.” Good communication skills are essential for data managers because the role requires explaining complex concepts in a simple way. “Increasingly, data managers are involved in influencing the company in how they should be using and managing data, which involves great communication skills as well as commercial awareness,” Bell adds.


3 ways to gauge your company’s preparedness to recover from data loss

Where you store your data backup is nearly as important as creating copies in the first place. Storing your data in the cloud does not mean it is secure. Cloud services follow the cloud shared responsibility model, where the service holds and maintains your data, but your IT staff is primarily responsible for protecting it. ... Just because your data is backed up does not mean it can be recovered — without a restoration strategy, you may still lose data. Companies need a step-by-step plan to salvage their data if it is compromised. If you decide to pay an attacker, you cannot count on a clean exchange. ... Write down your recovery plan step by step, including who is responsible for each task. Run through regular simulation tests with teams and stakeholders involved in the process to ensure it works. And much like a football coach reworks plays based on changing conditions, you must make adjustments as business and technological circumstances evolve. Set a schedule to periodically review and update the strategy.


Data Management in Complex Systems

Concerns such as data privacy and provenance are far more important now: being able to audit and analyze who accesses a particular data item, and why, can be a hard requirement in many fields. The notion of one bucket in which all the information in the organization resides is no longer viable. Another important sea change involves common architectural patterns. Instead of a single monolithic system to manage everything in the organization, we now break our systems apart into much smaller components. Those components have different needs and requirements, are released on different schedules, and use different technologies. The sheer overhead of trying to coordinate between all of those teams when you need to make a change is a big barrier; the cost of coordination across so many teams and components is simply too high. Instead of a single shared database, the concept of independent application databases is commonly used. This is an important piece of a larger architectural concept.


Google wants to help Singapore firms tap data, AI responsibly

With organisations worldwide digitally transforming their business, including those in Singapore and Malaysia, the US cloud vendor is keen to figure out how its technology and infrastructure can facilitate their efforts. Data, specifically, will prove critical in enabling companies to tap new opportunities in a digital economy, said Google Cloud's Singapore and Malaysia country director, Sherie Ng, in an interview with ZDNET. She said businesses would need to figure out how to leverage data to better understand and serve customers as well as to reduce inefficiencies and improve work processes. The ability to generate insights from the right data also would be essential for companies to not only birth new businesses and products and services, but also identify ways to measure and reduce their energy consumption and costs, Ng said. This meant building digital infrastructures that were global in scale and able to support real-time access to data, she noted. She added that organisations in some markets such as Singapore now were looking to gain more value from their cloud adoption as they moved up the model.


Google Engineer Outlines What’s Next for Angular

“Now we enter into phase three, which is the fruits of our hard work,” Twersky said. “This phase has yet to happen, frankly. Version 15 is scheduled for November, so this is very speculative and early preview. But the idea here is […] everything that we unlocked.” Version 15 will see full support for standalone APIs, she said. “We have something that will benefit everyone, which is Zone.js-enabled async stack tagging by default, but we’re just calling it better stack traces,” she said. This comes from another collaboration with Chrome and will make it easier to pare stack traces down to what’s relevant, even when errors occur in open source code a developer didn’t write. Version 15 also promises to make the router tree-shakable, which basically means removing unused code from the code base. In writing a standalone version of the router, the team was able to drop parts of the router module that are no longer needed, making it more tree-shakable, she said. The new config API allows developers to tree-shake major pieces of the router API, she said.
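
For context, the standalone APIs being discussed were available in developer preview at the time. A minimal sketch of bootstrapping without NgModules, with provideRouter standing in for the router module (the component and empty route list are illustrative):

```typescript
import { Component } from "@angular/core";
import { bootstrapApplication } from "@angular/platform-browser";
import { provideRouter, RouterOutlet, Routes } from "@angular/router";

// A standalone component declares its own template dependencies;
// no NgModule is required.
@Component({
  standalone: true,
  selector: "app-root",
  imports: [RouterOutlet],
  template: `<h1>Standalone Angular</h1><router-outlet></router-outlet>`,
})
class AppComponent {}

const routes: Routes = []; // illustrative; real routes go here

// provideRouter() is the tree-shakable alternative to RouterModule.forRoot():
// router features the app does not use can be dropped from the bundle.
bootstrapApplication(AppComponent, {
  providers: [provideRouter(routes)],
}).catch((err) => console.error(err));
```
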


Which cloud is for you?

Google Cloud, Lakshmanan goes on, “is about ease of use—a few robust products that integrate robustly for the most popular needs across all scales.” This is great so long as you stick with Google’s opinionated approach. If not, be warned. “If you are building something offbeat, it will be frustrating,” says engineer Clint Byrum: “GCP is neat and orderly, pretty much one way to solve any problem, which means it is great for 90% of problems and pretty frustrating for the 10%.” For all these reasons and despite those issues, Lakshmanan concludes, “Software developers [and] data scientists love it.” ... Ant Stanley, who has used all three cloud providers in his consulting practice, finds much to like about each but hints that Azure is perhaps the one that adheres most doggedly to its Windows past. This can be a criticism, but it’s also a source of strength. Microsoft has spent decades making IT folks very happy. If Azure is a way of continuing that trend, it’s hard to suggest this is bad strategy or bad technology. Matt Gillard, who also consults using the different clouds, notes that Azure is very focused on enterprises and government, both of which run lots of Windows.


Data Management Models for the Cloud

When the organization knows what or who is driving cloud cost, it can collaborate with those consumers on usage, optimization, and governance policies that ensure business value is being derived from cloud workloads. Karl Martin, chief technology officer at integrate.ai, adds that a key step is to understand and plan for the intended ways in which value is to be extracted from the data assets before implementing a new data management scheme. “Historically, investments into general-purpose data management tooling, such as data lakes, were made without a full understanding of how value was to be extracted,” he says. The strategy often assumed that it would be “figured out later”, which has produced disappointing results for organizations where there is a struggle to map a potential wealth of data assets to business problems. “In some cases, the data management systems do not contemplate the demands of modern machine learning systems that would be a center of creative experimentation for data scientists and owners of lines of business,” Martin explains.


Retaining IT talent: 5 tips for better training opportunities

While it’s good to foster growth opportunities, the real benefit comes when you enable employees to have ownership over these practices. Consider a community of practice (CoP); this is an excellent avenue for people to interact, learn, collaborate, and even devise organizational improvements. Traditionally, employees need executive sponsorship to form a CoP, but try to stay out of the process until the finish line. Empower staff to create CoP proposals independently and then sponsor when you can. Employees will likely create communities around various role-relevant topics like test automation, continuous delivery, and DevOps. However, even something as simple as a book club can be a great source of skill development. Allowing staff members to spearhead clubs, initiatives, or training opportunities relevant to them means that they’ll feel more ownership over the learning process – and that those lessons are more likely to stick. It’s counterproductive to give staff the option of creating a CoP without giving them the resources they need for the group to succeed.


Data privacy audit checklist – how to compile one

When conducting a privacy audit, it’s important to identify the data you have, where it is stored and what you use it for. “Once you know what data you have, you need to establish where you got it from,” says Nigel Jones, co-founder, Privacy Compliance Hub. “Then you can work out what rights you have in relation to it; what you do with it; where you keep it; how long you keep it; and what happens when you no longer need it.” This basic inventory will form the basis of the rest of your audit as well as your Record of Processing Activities (ROPA), he says. But there is no point keeping data safe within your own organisation if you then share it with others who do not respect it, Jones points out. “Make sure you have a list of all organisations you share information with; have agreements in place with all of them; and be ready to demonstrate why you think they are safe to process data.” GDPR compliance requires that data is only used for the purpose it was collected for, so you’ll need to prove your business has committed to this principle, says Jamie Akhtar, CEO and co-founder of CyberSmart.


DevOps at Schneider: a Meaningful Journey of Engaging People into Change

Bottom line - telling your story, no matter how bad it may look or sound, but really pulling back the covers and putting the raw data out there can be uncomfortable, but is absolutely necessary to ignite your case for change. Spend time making your case for change less formal and more meaningful and something people can easily relate to. The best way to do this is by scrapping all those stiff templates, and crafting your case for change like you are writing a story and marketing it like you are making the sale of your life! Some fun ways to do this include using short motion graphic animations vs. boring emails or one-page PowerPoints, scheduling informal town hall meetings to collect feedback and get input on what you are trying to do (or sell), and anonymous surveys for those who are uncomfortable providing feedback in a more formal way. The options are endless, but think outside the box and make it fun. ... DevOps and Scope Creep are synonymous - Always come back to the "why" of your DevOps transformation and use your goals and objectives as your true north to validate your progress as you get started.



Quote for the day:

"No great manager or leader ever fell from heaven, its learned not inherited." -- Tom Northup

Daily Tech Digest - September 24, 2022

Tackling Developer Onboarding Complexity

A common thread in onboarding, and more broadly in reducing developer cognitive load, is the concept of “golden paths” or “paved paths.” Ultimately, the idea is to reduce complexity and get to the bare bones of what needs to be learned or done to increase developer velocity and safety. Mostly, once the cultural aspects of onboarding are covered, this comes back to the “golden path” platform created for developers, which includes the tools and processes that are proven to work but aren’t handcuffs. Once a developer knows how to walk, for example, platforms should be flexible enough to let them run. Humanitec’s CEO, Kaspar von Grünberg, said, “Perhaps more important than fancy golden paths is to agree on the lowest common tech denominator to empower developers to work faster. Why run ultra-complex things if there is an alternative? It is like taking a tractor to do your grocery shopping, which is not productive. If you scatter things all over the place, you are not getting the effects of scale, and the tools you bring in are not delivering ROI. This is why I advocate for the value of standardization. Standardization forms the lowest common tech denominators, clearing the way for individual freedom where needed.”


How devops in the cloud breaks down

First is the obvious issue: talent. To do devops in the cloud, you need devops engineers who understand how to build and use toolchains. More important, you need engineers who know how to build toolchains using cloud-based tools. Some (but not many) people out there have these skills. I see many companies fail to find them and even pull back devops to traditional platforms just so they can staff up. Sadly, that’s not a bad strategy right now. Second, the cloud rarely has all the tools you’ll need for most devops toolchains. Although we have a tremendous number of devops tools, either sold by the public cloud providers or by key partners that sell devops cloud services, about 10% to 20% of the tools you’ll need don’t exist on your public cloud platform. You will have to incorporate another provider’s platform, which then leads to multicloud complexity. Of course, the need for those absent tools depends on the type of application you’re building. This shortage is not as much of a problem as it once was because devops tool providers saw the cloud computing writing on the wall and quickly filled in the gaps.


Tesla is set to introduce its prime 'Optimus' robot

"Autopilot/AI team is also working on Optimus and (actually smart) summon/autopark, which have end of month deadlines," Musk wrote while responding to a Tesla fan club account on Twitter. Musk's Texas-based company is reportedly considering ambitious plans to use thousands of humanoid robots within its factories before eventually extending to millions globally, per a job posting. According to Musk, who is now promoting a vision for the company that extends far beyond producing self-driving electric cars, the robot industry may eventually be worth more than Tesla's automobile income. A source familiar with the situation claimed that as Tesla holds more internal discussions on robotics, the buzz is growing within the organization. ... For Tesla to be successful, it will have to display robots performing various spontaneous acts. Such evidence might help Tesla stock, which is currently down 25 percent from its 2021 peak, according to Nancy Cooke, a professor of human systems engineering at Arizona State University.


Researchers Say It'll Be Impossible to Control a Super-Intelligent AI

Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits. "A super-intelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'," wrote the researchers. "This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable." Part of the team's reasoning came from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. As Turing proved through some smart math, while we can answer that question for some specific programs, it's logically impossible to find a method that answers it for every potential program that could ever be written.
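Turing's argument fits in a few lines of code. Below is a minimal sketch of the classic diagonalization, assuming (purely for illustration) that a universal halts(program, input) oracle could be written; the contradiction is exactly why it cannot.

```python
# A minimal sketch of the halting-problem contradiction. The `halts`
# oracle below is a hypothetical assumption for illustration; Turing's
# result is precisely that no such general-purpose oracle can exist.

def halts(program, program_input) -> bool:
    """Assumed oracle: True if program(program_input) eventually halts."""
    raise NotImplementedError("no general implementation can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:          # oracle says "halts", so loop forever
            pass
    return "halted"          # oracle says "loops", so halt immediately

# Feeding `paradox` to itself yields the contradiction: if
# halts(paradox, paradox) returned True, paradox would loop forever
# (oracle wrong); if it returned False, paradox would halt (wrong again).
```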


The Mutating Cyber Threat

Although each best practice is important, having a programmatic approach is essential for success, Kaun said. “Too many organizations look at security as a list of individual tasks such as perimeter protection and patching, but in reality they all have to work together.” As best practices mature and become part of corporate culture, and as people become educated and equipped to apply those best practices, true change and improved security begin to evolve. “A common adage in security is ‘people, processes, and technology,’” Cusimano noted. “Two of those involve people because people have to adhere to the processes.” The human element is the ultimate toolset, including awareness, collaboration, support, and maintenance. “A proper security program is properly educated and equipped people applying best practice policy and procedures, aided by technology,” Kaun said. “While the right technology will accelerate the effort, if you do not have the global view, the appropriate people, and contextual data to act upon, you will struggle.” Establishing that culture is critical but won’t happen overnight, Cusimano said. He recalled the transition to a safety-first culture in many manufacturing plants.


MIT and Databricks Report Finds Data Management Key to Scaling AI

“Data issues are more likely than not to be the reason if companies fail to achieve their AI goals, according to more than two-thirds of the technology executives we surveyed,” says Francesca Fanshawe, editorial director for MIT Technology Review and editor of the report. “Improving processing speeds, governance, and quality of data, as well as its sufficiency for models, are the main data imperatives to ensure AI can be scaled.” Data security is also a priority, with leaders revealing they plan to increase spending on security improvement by an average of 101% over the next three years. The leader group also plans to invest 85% more in the same period on data governance, 69% more on new data and AI platforms, and 63% more on existing platforms. The report lists a few attributes of successful data and AI technology foundations, including a democratization of data to involve a greater number of data-literate employees who can configure and improve AI algorithms. Openness is another attribute, with open standards and data formats allowing organizations to source data, insights, and tools externally to facilitate collaboration.


Responsible AI, Blockchain in Safe and Ethical AI

Artificial Intelligence (AI) is a broad field that includes machine learning and cognitive computing, where computers are programmed to mimic cognitive functions such as learning and problem solving many times faster and more accurately than a human. AI, or its subset computational intelligence, when combined with blockchain systems, can create more robust cryptographic functionality and ciphers, thereby making it more difficult for attackers to compromise systems. When blockchain participants have increased control over their data, they have the potential to decide with which parties and for what purposes their data are shared. To collect participant data for use in an AI dataset, participant permissions will need to be obtained. ... The decentralized characteristics of smart blockchains can effectively help smart grids realize the transformation from centralization to distribution. The decentralization of smart blockchain breaks information barriers and realizes secure data sharing among multiple participants.
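To make the permission mechanism concrete, here is a minimal sketch of a hash-chained consent ledger; the class and field names are illustrative assumptions, not any particular blockchain platform's API, and a real deployment would add digital signatures and distributed consensus.

```python
# A minimal, single-node sketch of tamper-evident consent records for
# data sharing. Names are illustrative assumptions; a real blockchain
# would add signatures, peer replication, and consensus.
import hashlib
import json
import time

def _digest(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ConsentLedger:
    def __init__(self):
        # Genesis block anchors the chain.
        self.chain = [{"index": 0, "timestamp": 0, "record": None, "prev": "0" * 64}]

    def grant(self, participant: str, grantee: str, purpose: str) -> dict:
        """Append a record of who may use a participant's data, and why."""
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "record": {"participant": participant, "grantee": grantee, "purpose": purpose},
            "prev": _digest(self.chain[-1]),  # link makes earlier edits detectable
        }
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        # Altering any earlier record breaks every subsequent `prev` hash.
        return all(blk["prev"] == _digest(self.chain[i]) for i, blk in enumerate(self.chain[1:]))

ledger = ConsentLedger()
ledger.grant("participant-17", "analytics-team", "train anonymized model")
assert ledger.verify()
```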


Worried about quiet quitting? These Dos and Don'ts could stop it becoming a problem

To understand the risk of quiet quitting in current employees, keep in touch with former employees and find out what made them leave the company. Their insight can help you improve culture for current employees and reduce further resignations. Deal suggests conducting thorough exit interviews with employees who leave the company and reaching out six months later to assess their experience at their new job if they have one. This six-month communication opportunity can be the route back to the former workplace for some employees. If an employee expresses dissatisfaction at their new job and an interest in returning to your company, see what you can do for them. Employees who left your company on good terms, and later want to return to their old jobs, are called boomerang employees, and they can be very beneficial to your company. ... But beware: some employees may hesitate to ask for their old jobs back. They might fear a response from former colleagues who were unhappy at their departure, or they might be concerned about an employee they didn't like who is still in the business. But if you're lucky, this is an opportunity to have excellent talent return to your company.


DevOps Is Dead. Embrace Platform Engineering

Developers don’t want to do operations anymore, and that’s a bad sign for DevOps, at least according to this article by Scott Carey and this Twitter thread by Sid Palas. ... When developers in teams don’t agree on the extent to which they should, or can, do operations tasks, forcing everyone to do DevOps in a one-size-fits-all way has disastrous consequences. The primary consequence is the increasing cognitive load put on developers. This has forced many teams to reconsider how they balance the freedom that comes from developer self-service with mitigating cognitive load through abstraction. Both are necessary: Self-service capabilities are essential to moving quickly and efficiently. ... Platform engineering uses a product approach to enable the right amount of developer self-service and find the right level of abstraction for individual organizations and teams. Successful platform teams combine user research, regular feedback and marketing best practices to understand their developers, create a platform that solves common problems and get internal buy-in from key stakeholders.


SEO poisoning campaign directs search engine visitors from multiple industries to JS malware

Deepwatch came across the campaign while investigating an incident at a customer where one of the employees searched for “transition services agreement” on Google and ended up on a website that presented them with what appeared to be a forum thread where one of the users shared a link to a zip archive. The zip archive contained a file called "Accounting for transition services agreement" with a .js (JavaScript) extension that was a variant of Gootloader, a malware downloader known in the past to deliver a remote access Trojan called Gootkit but also various other malware payloads. Transition services agreements (TSAs) are commonly used during mergers and acquisitions to facilitate the transition of a part of an organization following a sale. Since they are frequently used, many resources about them are likely available online. The fact that the user saw and clicked on this link suggests it was displayed high in the search rankings. When looking at the site hosting the malware delivery page, the researchers realized it was a sports streaming distribution site that, based on its content, was likely legitimate.
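A common first-pass triage for this delivery mechanism is to flag downloaded archives that contain script files. Below is a minimal sketch of that heuristic (an illustrative assumption, not Deepwatch's tooling) using Python's standard zipfile module.

```python
# Minimal triage sketch: list script files inside a downloaded zip
# archive, such as the .js Gootloader payload described above. This is
# an illustrative heuristic, not a substitute for real malware analysis.
import zipfile

SCRIPT_EXTENSIONS = (".js", ".jse", ".vbs", ".wsf", ".hta")

def flag_script_payloads(archive_path: str) -> list[str]:
    """Return names of archive members with commonly abused script extensions."""
    with zipfile.ZipFile(archive_path) as archive:
        return [
            name for name in archive.namelist()
            if name.lower().endswith(SCRIPT_EXTENSIONS)
        ]

# For the archive described in the article, this would return
# ['Accounting for transition services agreement.js'].
```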



Quote for the day:

"Open Leadership: the act of engaging others to influence and execute a coordinated and harmonious conclusion." -- Dan Pontefract