Daily Tech Digest - May 01, 2021

Is Open Source More Secure Than Closed Source?

Open source software offers greater transparency to the teams that use it: visibility into both the code itself and how it is maintained. Giving organizations access to the source code allows them the opportunity to evaluate the security of the code for themselves. Additionally, users have more visibility into how and what changes are made to the code base, including the pre-release review process, how often dependencies are updated and how developers and organizations respond to security vulnerabilities. As a result, open source software users have a more complete picture of the overall security of the software they’re using. Another major benefit is found in the communities which drive the growth and development of open source software. The vast majority of open source software is backed by communities of forward-thinking developers, many of whom use the same software they build and maintain as a primary means of communicating with team members. Open source developers and the communities around the software value users’ input to a significant degree, and many user suggestions end up getting incorporated into new versions.


Let’s Not Regulate A.I. Out of Existence

A.I. is being used to analyze vast amounts of space data and is having an enormous impact on health care. A.I. image and scan analysis is, for example, helping doctors identify breast and colon cancer. It’s also showing potential in vaccine creation. I guarantee that A.I. will someday save lives. It’s those kinds of A.I.-driven data analysis that get shoved aside by news of an A.I. beating a world-champion Go player or the world’s best-known entrepreneur raising alarms about a situation where “A.I. is vastly smarter than humans.” That kind of fear-mongering leads consumers, who don’t understand the differences between A.I. that scans a crowd of 10,000 faces for one suspect and one that can create recipes based on pleasing ingredient combinations, to mistrust all A.I., and to write the kind of stifling regulation produced by the EU. Even if you still think the negatives outweigh the benefits, we’ll arguably need better and bigger A.I. to manage and sift through the mountains of data we produce every single day. To deny A.I.’s role in this is like saying we don’t need garbage collection services and that our debris can just pile up on street corners indefinitely.


AutoNLP: Automatic Text Classification with SOTA Models

AutoNLP is a tool to automate the process of creating end-to-end NLP models. Developed by the Hugging Face team, it was launched in its beta phase in March 2021. AutoNLP aims to automate each phase that makes up the life cycle of an NLP model, from training and optimizing the model to deploying it. “AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem.” — AutoNLP team. One of the great virtues of AutoNLP is that it implements state-of-the-art models for the tasks of binary classification, multi-class classification, and entity recognition, supported in 8 languages: English, German, French, Spanish, Finnish, Swedish, Hindi, and Dutch. Likewise, AutoNLP takes care of the optimization and fine-tuning of the models. On the security and privacy side, AutoNLP protects data transfers with SSL, and the data is private to each user account. As we can see, AutoNLP emerges as a tool that facilitates and speeds up the process of creating NLP models. In the next section, we will see what the experience was like, from start to finish, when creating a text classification model using AutoNLP.


5 Reasons Why Artificial Intelligence Won’t Replace Physicians

Even if the array of technologies offered brilliant solutions, it would be difficult for them to mimic empathy. Why? Because at the core of compassion, there is the process of building trust: listening to the other person, paying attention to their needs, expressing the feeling of understanding and responding in a manner that lets the other person know they were understood. At present, you would not trust a robot or a smart algorithm with a life-altering decision; or even with a decision whether or not to take painkillers, for that matter. We don’t even trust machines with tasks where they are better than humans – like taking blood samples. We will need doctors holding our hands while telling us about a life-changing diagnosis, guiding us through therapy and offering their overall support. An algorithm cannot replace that. ... More and more sophisticated digital health solutions will require qualified medical professionals’ competence, no matter whether it’s about robotics or A.I. The human brain is so complex and able to oversee such a vast scale of knowledge and data that it simply is not worth developing an A.I. that takes over this job – the human brain does it so well. It is more worthwhile to program those repetitive, data-based tasks, and leave the complex analysis and decisions to the person.


Mimicking the brain: Deep learning meets vector-symbolic AI

Machines have been trying to mimic the human brain for decades. But neither the original, symbolic AI that dominated machine learning research until the late 1980s nor its younger cousin, deep learning, has been able to fully simulate the intelligence the brain is capable of. One promising approach towards this more general AI is combining neural networks with symbolic AI. In our paper “Robust High-dimensional Memory-augmented Neural Networks” published in Nature Communications, we present a new idea linked to neuro-symbolic AI, based on vector-symbolic architectures. We’ve relied on the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. Specifically, we wanted to combine the learning representations that neural networks create with the compositionality of symbol-like entities, represented by high-dimensional and distributed vectors. The idea is to guide a neural network to represent unrelated objects with dissimilar high-dimensional vectors. In the paper, we show that a deep convolutional neural network used for image classification can learn from its own mistakes to operate with the high-dimensional computing paradigm, using vector-symbolic architectures.
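The “dissimilar high-dimensional vectors” idea rests on a well-known property of high-dimensional spaces: two randomly chosen vectors are almost always nearly orthogonal. A minimal sketch (illustrative only, not the paper’s method) showing this with random bipolar hypervectors:

```python
import random

def random_bipolar(dim):
    # Random bipolar (+1/-1) hypervector, the kind used in vector-symbolic architectures.
    return [random.choice((-1, 1)) for _ in range(dim)]

def cosine(u, v):
    # For bipolar vectors, cosine similarity reduces to the normalized dot product.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / len(u)

dim = 10_000
a, b = random_bipolar(dim), random_bipolar(dim)
print(cosine(a, a))  # 1.0: the same object always maps to the same vector
print(cosine(a, b))  # close to 0: unrelated objects land nearly orthogonal
```

With 10,000 dimensions, the similarity of two random vectors concentrates tightly around zero (standard deviation about 0.01), which is what lets unrelated symbols remain cleanly distinguishable.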


How to master manufacturing's data and analytics revolution

The Manufacturing Data Excellence Framework, developed by a community of companies hosted by the World Economic Forum’s Platform for Shaping the Future of Advanced Manufacturing and Production, serves this purpose. We introduced this framework, comprising 20 different dimensions with five different maturity levels, in our recent white paper, “Data Excellence: Transforming manufacturing and supply systems”. “One of the challenges we face when discussing the industry transformation towards data ecosystems is the lack of commonality of terminology. It’s very powerful to have a tool in which we have created common definitions and explanations, and around which we can build the foundations towards data sharing excellence in manufacturing,” says Niall Murphy, CEO and Co-founder of EVRYTHNG. The first step is an assessment of the status quo using the framework. Companies will be able to objectively assess their maturity in implementing applications and technological and organizational enablers. They will then be able to compare their individual maturity versus the benchmark and define their individual target state.


Ethics of AI: Benefits and risks of artificial intelligence

Ethical issues take on greater resonance when AI expands to uses that are far afield of the original academic development of algorithms. The industrialization of the technology is amplifying the everyday use of those algorithms. A report this month by Ryan Mac and colleagues at BuzzFeed found that "more than 7,000 individuals from nearly 2,000 public agencies nationwide have used technology from startup Clearview AI to search through millions of Americans' faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members." Clearview neither confirmed nor denied BuzzFeed's findings. New devices are being put into the world that rely on machine learning forms of AI in one way or another. For example, so-called autonomous trucking is coming to highways, where a "Level 4 ADAS" tractor trailer is supposed to be able to move at highway speed on certain designated routes without a human driver. A company making that technology, TuSimple, of San Diego, California, is going public on Nasdaq. In its IPO prospectus, the company says it has 5,700 reservations so far in the four months since it announced availability of its autonomous driving software for the rigs.


Dale Vince has a winning strategy for sustainability

Fundamentally, it’s more economic to do the right thing than the wrong thing. Renewable energy, for example, is a great democratizing force in world affairs because the wind and the sun are available to every country on the planet, whereas oil and gas are not. We fight wars over oil and gas quite literally because it’s such a precious resource. And here in Britain, we spend £55 billion [US$76 billion] every year buying fossil fuels from abroad to bring them here to burn them. And if we spent that money on wind and solar machines instead, we could make our own electricity, create jobs, and be independent from fluctuating global fossil fuel markets and currency exchanges. We can create a stronger, more resilient economy, as well as a cleaner one. ... I think businesses historically reinvent themselves. They move with the times or they die, and that’s a natural order of things. And some businesses just get left behind because their business model becomes outdated. A nimble, adaptive business will move from the old way of doing things and will still be here. 


A Deeper Dive into the DOL’s First-of-Its-Kind Cybersecurity Guidance

ERISA’s duty of prudence requires fiduciaries to act “with the care, skill, prudence, and diligence under the circumstances then prevailing that a prudent man acting in a like capacity and familiar with such matters would use in the conduct of an enterprise of a like character and with like aims.” It has become generally accepted that ERISA fiduciaries have some responsibility to mitigate the plan’s exposure to cybersecurity events. But, prior to this guidance, it was not clear what the DOL considered to be prudent with respect to addressing cybersecurity risks, including those related to identity theft and fraudulent withdrawals. Each of the three new pieces of guidance addresses a different audience. The first, Tips for Hiring a Service Provider with Strong Cybersecurity Practices (Tips for Hiring a Service Provider), provides guidance for plan fiduciaries when hiring a service provider, such as a recordkeeper, trustee, or other provider that has access to a plan’s nonpublic information. The second, Cybersecurity Program Best Practices (Cybersecurity Best Practices), is, as the name indicates, a collection of best practices for recordkeepers and other service providers, and may be viewed as a reference for plan fiduciaries when evaluating service providers’ cybersecurity practices. The third, Online Security Tips (Online Security Tips), contains online security advice for plan participants and beneficiaries. We have summarized each piece of guidance below along with our key observations.


Less complexity, more control: The role of multi-cloud networking in digital transformation

The panellists agreed that it means going back to layer by layer design principles with clean APIs up and down the protocol stack from application to the lowest levels of connectivity. Without such design rigour, programming or operator errors in a complex highly distributed system could have profound consequences. Cisco’s Pandey says that while it appeared “horribly scary” in terms of connectivity to take monolithic apps and make them cloud-native, the upside is that the resulting discrete components of the application can be swapped out or taken down with fewer consequences to the rest of the system and ultimately to customers. But, he warned, “you need to have the tools and capabilities to monitor it – the full-stack observability piece. You need to have discoverability and you need to have security at the API layer all the way down so that you can manage things properly”. His comments were echoed by Alkira’s Khan, who pointed out that the problems of a distributed architecture are particularly acute for enterprises trying to apply a security posture in a multi-cloud environment.



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." - William Pollard

Daily Tech Digest - April 30, 2021

Tech to the aid of justice delivery

Obsolete statutes which trigger unnecessary litigation need to be eliminated, as is currently being done, with over 1,500 statutes removed in the last few years. Furthermore, for any new legislation, a sunset review clause should be made a mandatory intervention, such that every few years it is reviewed for its relevance to society. A corollary to this is scaling up the decriminalisation of minor offences after determining, as Kadish SH showed in his seminal paper ‘The Crisis of Overcriminalization’, whether the total public and private costs of criminalisation outweigh the benefits. Non-compliance with certain legal provisions which don’t involve mala fide intent can be addressed through monetary compensation rather than prison time, which inevitably instigates litigation. Finally, among the plethora of ongoing litigations in the Indian court system, a substantial number are those that don’t require interpretation of the law by a judge, but simply adjudication on facts. These can take the route of ODR, which has the potential for dispute avoidance by promoting legal education and inducing informed choices about initiating litigation, and also containment by making use of mediation, conciliation or arbitration, resolving disputes outside the court system.


Leading future-ready organizations

To break through these barriers to Agile, companies need a restart.

They need to continue to expand on the initial progress they’ve made but focus on implementing a wider, more holistic approach to Agile. Every aspect of the organization must be engaged in an ongoing cyclical process of “discover and evaluate, prioritize, build and operate, analyze…and repeat.” ... Organizations that leverage digital decoupling are able to get on independent release cycles and unlock new ways of working with legacy systems. Based on our work with clients, we’ve seen that this can result in up to 30% reduction in cost of change, reduced coordination overhead, and increased speed of planning and pace of delivery. ... In our work with clients, we see firsthand how cross-functional teams and automation of application delivery and operations contribute to increased pace of delivery, improved employee productivity, and up to 30% reduction in deployment time. Additionally, scaling DevOps enables fast and reliable releases of new features to production within short iterations and includes optimizing processes and upskilling people, which is the starting point for a collaborative and liquid enterprise. ... Moving talent and partners into a non-hierarchical and blended talent sourcing and management model can result in 10-20% increase in capacity.


F5 Big-IP Vulnerable to Security-Bypass Bug

The vulnerability specifically exists in one of the core software components of the appliance: The Access Policy Manager (APM). It manages and enforces access policies, i.e., making sure all users are authenticated and authorized to use a given application. Silverfort researchers noted that APM is sometimes used to protect access to the Big-IP admin console too. APM implements Kerberos as an authentication protocol for authentication required by an APM policy, they explained. “When a user accesses an application through Big-IP, they may be presented with a captive portal and required to enter a username and password,” researchers said, in a blog posting issued on Thursday. “The username and password are verified against Active Directory with the Kerberos protocol to ensure the user is who they claim they are.” During this process, the user essentially authenticates to the server, which in turn authenticates to the client. To work properly, the KDC must also authenticate to the server. The KDC is a network service that supplies session tickets and temporary session keys to users and computers within an Active Directory domain.
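The class of bypass at issue here – a gateway that checks credentials against a KDC without ever authenticating the KDC itself – can be illustrated with a toy model. Everything below is a deliberately simplified sketch (class and function names are invented, and this is not F5’s actual implementation), showing only why missing KDC authentication matters:

```python
# Toy model of KDC spoofing: if the gateway never verifies the KDC's own
# identity, an attacker who redirects KDC traffic can approve any credentials.

class LegitimateKDC:
    USERS = {"alice": "correct-horse"}

    def validate(self, user, password):
        return self.USERS.get(user) == password

class RogueKDC:
    def validate(self, user, password):
        return True  # attacker-controlled KDC says yes to everything

def gateway_login(user, password, kdc):
    # Vulnerable pattern: trust whichever KDC the gateway happens to reach,
    # with no mutual authentication of the KDC itself.
    return kdc.validate(user, password)

print(gateway_login("alice", "wrong-pass", LegitimateKDC()))  # False
print(gateway_login("alice", "wrong-pass", RogueKDC()))       # True: bypassed
```

The fix the protocol intends is mutual authentication: the server must cryptographically verify that the KDC’s responses come from the real KDC before trusting its verdict.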


4 Business Benefits of an Event-Driven Architecture (EDA)

Using an event-driven architecture can significantly improve developmental efficiency in terms of both speed and cost. This is because all events are passed through a central event bus, which new services can easily connect with. Not only can services listen for specific events, triggering new code where appropriate, but they can also push events of their own to the event bus, indirectly connecting to existing services. ... If you want to increase the retention and lifetime value of customers, improving your application’s user experience is a must. An event-driven architecture can be incredibly beneficial to user experience (albeit indirectly) since it encourages you to think about and build around… events! ... Using an event-driven architecture can also reduce the running costs of your application. Since events are pushed to services as they happen, there’s no need for services to poll each other for state changes continuously. This leads to significantly fewer calls being made, which reduces bandwidth consumption and CPU usage, ultimately translating to lower operating costs. Additionally, those using a third-party API gateway or proxy will pay less if they are billed per-call.
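The central event bus described above can be sketched as a minimal publish/subscribe implementation (all names are illustrative). New services connect to existing ones only indirectly, through events:

```python
from collections import defaultdict

class EventBus:
    """Minimal central event bus: services subscribe to event types and publish events."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # A service listens for a specific event type.
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Events are pushed to interested services as they happen -- no polling.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
shipped = []

# An existing fulfillment service listens on the bus.
bus.subscribe("order.placed", lambda order: shipped.append(order["id"]))

# A new ordering service only needs to publish; it never calls fulfillment directly.
bus.publish("order.placed", {"id": 42, "item": "widget"})
print(shipped)  # [42]
```

Because the ordering service never references the fulfillment service, either side can be added, swapped, or scaled without the other changing – which is where the development-speed and cost benefits come from.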


Gartner says low-code, RPA, and AI driving growth in ‘hyperautomation’

Gartner said process-agnostic tools such as RPA, LCAP, and AI will drive the hyperautomation trend because organizations can use them across multiple use cases. Even though they constitute a small part of the overall market, their impact will be significant, with Gartner projecting 54% growth in these process-agnostic tools. Through 2024, the drive toward hyperautomation will lead organizations to adopt at least three out of the 20 process-agnostic types of software that enable hyperautomation, Gartner said. The demand for low-code tools is already high as skills-strapped IT organizations look for ways to move simple development projects over to business users. Last year, Gartner forecast that three-quarters of large enterprises would use at least four low-code development tools by 2024 and that low-code would make up more than 65% of application development activity. Software automating specific tasks, such as enterprise resource planning (ERP), supply chain management, and customer relationship management (CRM), will also contribute to the market’s growth, Gartner said.


When cryptography attacks – how TLS helps malware hide in plain sight

Lots of things that we rely on, and that are generally regarded as bringing value, convenience and benefit to our lives…can be used for harm as well as good. Even the proverbial double-edged sword, which theoretically gave ancient warriors twice as much fighting power by having twice as much attack surface, turned out to be, well, a double-edged sword. With no “safe edge” at the rear, a double-edged sword that was mishandled, or driven back by an assailant’s counter-attack, became a direct threat to the person wielding it instead of to their opponent. ... The crooks have fallen in love with TLS as well. By using TLS to conceal their malware machinations inside an encrypted layer, cybercriminals can make it harder for us to figure out what they’re up to. That’s because one stream of encrypted data looks much the same as any other. Given a file that contains properly-encrypted data, you have no way of telling whether the original input was the complete text of the Holy Bible, or the compiled code of the world’s most dangerous ransomware. After they’re encrypted, you simply can’t tell them apart – indeed, a well-designed encryption algorithm should convert any input plaintext into an output ciphertext that is indistinguishable from the sort of data you get by repeatedly rolling a die.
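The claim that “one stream of encrypted data looks much the same as any other” can be demonstrated with a toy one-time-pad cipher (stdlib only; real TLS uses ciphers like AES-GCM or ChaCha20, but the statistical point is identical). The two plaintexts below are stand-ins, not real malware:

```python
import secrets
from collections import Counter

def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy one-time pad: XOR each byte with a random key of equal length.
    return bytes(p ^ k for p, k in zip(plaintext, key))

benign = b"In the beginning " * 200      # stand-in for harmless text
malicious = b"encrypt_files(); " * 200   # stand-in for ransomware code

ct1 = xor_encrypt(benign, secrets.token_bytes(len(benign)))
ct2 = xor_encrypt(malicious, secrets.token_bytes(len(malicious)))

def byte_spread(data: bytes) -> int:
    # Number of distinct byte values used; uniform-looking data covers most of 0-255.
    return len(Counter(data))

print(byte_spread(benign))                 # small: plain ASCII uses few byte values
print(byte_spread(ct1), byte_spread(ct2))  # both near 256: statistically alike noise
```

Both ciphertexts are indistinguishable from random bytes, so an observer in the middle cannot tell scripture from ransomware by inspection – the property that makes TLS valuable to defenders and attackers alike.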


Decoupling Software-Hardware Dependency In Deep Learning

Working with distributed systems and data processing frameworks such as Apache Spark, Distributed TensorFlow or TensorFlowOnSpark adds complexity. The cost of associated hardware and software goes up too. Traditional software engineering typically assumes that hardware is at best a non-issue and at worst a static entity. In the context of machine learning, hardware performance directly translates to reduced training time. So, there is a great incentive for the software to follow the hardware development in lockstep. Deep learning often scales directly with model size and data amount. As training times can be very long, there is a powerful motivation to maximise performance using the latest software and hardware. Changing the hardware and software may cause issues in maintaining reproducible results and run up significant engineering costs while keeping software and hardware up to date. Building production-ready systems with deep learning components poses many challenges, especially if the company does not have a large research group and a highly developed supporting infrastructure. However, recently, a new breed of startups has surfaced to address the software-hardware disconnect.


4 tips for launching a successful data strategy

Your business partners know that data can be powerful, and they know that they want it, but they do not always know, specifically, what data they need and how to use it. The IT organization knows how to collect, structure, secure, and serve up the data, but they are not typically responsible for defining how best to leverage the data. This gap between serving up the data and using the data can be as wide as the Ancient Mariner’s ocean (sorry), over which the CIO needs to build a bridge. ... But how do we attract those brilliant data scientists who can build the data dashboard straw man? To counter the challenge of a really tight market for these rare birds, Nick Daffan, CIO of Verisk Analytics, suggests giving data scientists what we all want: interesting work that creates an impact. “Data scientists want to get their hands on data that has both depth and breadth, and they want to work with the most advanced tools and methods," Daffan says. "They also want to see their models implemented, which means being able to help their business partners and customers use the data in a productive way.”


How to boost internal cyber security training

A big part of maintaining engagement among staff when it comes to cyber security is explaining how the consequences of insufficient protection could affect employees in particular. “Unless individuals feel personally invested, they tend not to concern themselves with the impact of a breach,” said James Spiteri, principal security specialist at Elastic. “Provide training that moves beyond theory and shows the risks and implications through actual practice to help engage the individual. For example, simulating an attack to show how an insecure password or bad security hygiene on personal accounts can lead to unwanted access of people’s personal information such as photos or payment details could be very effective in changing behaviours. “Teams need to find relatable tools to help break down the complexities of cyber security. Showcasing cyber security problems through relatable items like phones, and everyday situations such as connecting to public Wi-fi, can help spread awareness of employees’ digital footprint and how easy it is to spread information without being aware of it.”


Shedding light on the threat posed by shadow admins

Threat actors seek shadow admin accounts because of their privilege and the stealthiness they can bestow upon attackers. These accounts are not part of a group of privileged users, meaning their activities can go unnoticed. If an account is part of an Active Directory (AD) group, AD admins can monitor it, and unusual behaviour is therefore relatively straightforward to pinpoint. However, shadow admins are not members of a group since they gain a particular privilege by a direct assignment. If a threat actor seizes control of one of these accounts, they immediately have a degree of privileged access. This access allows them to advance their attack subtly and craftily seek further privileges and permissions while escaping defender scrutiny. Leaving shadow admin accounts on an organization’s AD is a considerable risk that’s best compared to handing over the keys to one’s kingdom to do a particular task and then forgetting to track who has the keys and when to ask for them back. It pays to know who exactly has privileged access, which is where AD admin groups help. Conversely, the presence of shadow admin accounts could be a sign that an attack is underway.
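The detection logic implied above – flag accounts that hold a privileged right through direct assignment rather than membership in a monitored admin group – can be sketched as follows. The data shapes and names are illustrative, not a real Active Directory API:

```python
# Toy sketch: flag "shadow admins" -- accounts with directly assigned privileged
# rights that are invisible to group-based monitoring.

admin_groups = {"Domain Admins", "Enterprise Admins"}

accounts = [
    {"name": "alice", "groups": {"Domain Admins"}, "direct_rights": set()},
    {"name": "bob",   "groups": {"Staff"},         "direct_rights": {"ResetPassword"}},
    {"name": "carol", "groups": {"Staff"},         "direct_rights": set()},
]

def shadow_admins(accounts, admin_groups):
    flagged = []
    for acct in accounts:
        in_admin_group = bool(acct["groups"] & admin_groups)
        # Privileged via direct assignment, but outside every monitored admin group.
        if acct["direct_rights"] and not in_admin_group:
            flagged.append(acct["name"])
    return flagged

print(shadow_admins(accounts, admin_groups))  # ['bob']
```

In practice the “direct rights” come from auditing ACLs on AD objects, which is exactly why group membership alone is an insufficient inventory of privileged access.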



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor

Daily Tech Digest - April 29, 2021

Why the Age of IIoT Demands a New Security Paradigm

Perhaps the most dangerous and potentially prolific security threats are employees, experts contend. “We fear Russia in terms of cybersecurity breaches, but the good-hearted employee is the most dangerous,” says Greg Baker, vice president and general manager for the Cyber Digital Transformation organization at Optiv, a security systems integrator. “The employee that tries to stretch their responsibilities by updating a Windows XP workstation to Windows 10 and shuts the factory down—they’re the most dangerous threat actor.” Historically, security of OT environments has been addressed by preventing connectivity to outside sources or walling off as much as possible from the internet using a strategy many refer to as an “air gap.” With the latter approach, firewalls are the focal point of the security architecture, locking down an automation environment, perhaps in a specific building, to prevent external access as opposed to a strategy predicated on securing individual endpoints on the industrial network such as HMIs or PLCs. “We used to live in a world that was protected—you didn’t need to put a lock on your jewelry drawer because you had a huge fence around the property and no one was getting in,” explains John Livingston.


9 unexpected skills you need for today's tech team

Pekelman said that being adaptable is also crucial. "More than ever, teams need to be agile and flexible—as we've learned, things can truly change in a very short period of time," he said. Nathalie Carruthers, executive vice president and chief HR officer at Blue Yonder, agreed that change, innovation and transformation are the only constants in the tech world. "We look for candidates who can adapt to this constant change and who have a passion for learning," she said. In addition to working well with others, IT professionals have to be able to set priorities for their daily and weekly to-do lists without extensive guidance from the boss. Jon Knisley, principal of automation and process excellence at FortressIQ, said employees also should be able to think critically and act. "With more agile and collaborative work styles, employees need to execute with less guidance from management," he said. "The ability to conduct objective analysis and evaluate an issue in order to form a judgement is paramount in today's environment." Carruthers said technical skills and prior experience are good, but transferable skills are ideal. "Transferable skills showcase problem-solving ability, versatility and adaptability—common traits in successful leaders and essential elements for career development," she said.


4 Innovative Ways Cyberattackers Hunt for Security Bugs

A more time-consuming and less satisfying tactic to find bugs is fuzzing. I was once tasked with breaking into a company, so I started at a relatively simple place — its employee login page. I began blindly prodding, entering ‘a’ as the username, and getting my access denied. I typed two a’s… access denied again. Then I tried typing 1000 a’s, and the portal stopped talking to me. A minute later, the system came back online and I immediately tried again. As soon as the login portal went offline, I knew I found a bug. Fuzzing may seem like an easy path to finding every exploit on a network, but for attackers, it’s a tactic that rarely works on its own. And if an attacker fuzzes against a live system, they’ll almost certainly tip off a system admin. I prefer what I call spear-fuzzing: Supplementing the process with a human research element. Using real-world knowledge to narrow the attack surface and identify where to dig saves a good deal of time. Defenders are constantly focused on making intrusion more difficult for attackers, but hackers simply don’t think like defenders. Hackers are bound to the personal cost of time and effort, but not to corporate policy or tooling.
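The “spear-fuzzing” workflow described above – probe with inputs of growing length, note when the target misbehaves – can be sketched with a local harness. The target function is a hypothetical stand-in for a login handler with a length-handling bug, not any real system:

```python
# Minimal fuzzing sketch: feed the target growing runs of 'a' characters
# (like the login-portal story above) and record inputs that crash it.

def fragile_login_handler(username: str) -> str:
    if len(username) > 256:  # hidden bug: oversized input blows up
        raise MemoryError("buffer overflow (simulated)")
    return "access denied"

def fuzz(target, max_len=2048):
    findings = []
    length = 1
    while length <= max_len:
        payload = "a" * length
        try:
            target(payload)
        except Exception as exc:  # a crash marks a potential bug to investigate
            findings.append((length, type(exc).__name__))
        length *= 2  # grow the input: 1, 2, 4, ... a's
    return findings

print(fuzz(fragile_login_handler))  # first crash appears at length 512
```

Doubling the input length each round is the crude part; the “spear” element is a human narrowing where to aim and which payloads are worth trying, since exhaustive fuzzing against a live system is slow and noisy.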


7 Things Great Leaders Do Every Day

A leader needs to inspire takeaways, which will bring value to and for the team. Consistency in success relies on having all able hands on deck, working together and with mutual understanding, to make for the steadiest ship. If you're trying to build better structure within mid-sized or larger organizations, the Leader should consider delegating the sharing of information amongst department/division heads and allow for them to disseminate the state of things to their reports. Choosing one-on-ones, senior staff huddles, and/or both (depending on what needs to be accomplished) are good ways to ensure this process smoothly moves forward. These should not substitute for any regularly scheduled staff meetings, which should be conducted at the frequency and manner that most makes sense for your organizational environment, sector, and company size. In turn, communicating the state of things to your department/division heads will task and empower them to take progressive roles in having ownership of communications relevant to their department/division while being “in the know” on the overall macro level.


Rearchitecting for MicroServices: Featuring Windows & Linux Containers

First, let’s recap the definition of what a container is – a container is not a real thing. It’s not. It’s an application delivery mechanism with process isolation. In fact, in other videos I have made on YouTube, I compare how a container is similar to a waffle, or even a glass of whiskey. If you’re new to containers, I highly recommend checking out my “Getting Started with Docker” video series available here. Second, let’s simplify what a Dockerfile actually is – the TL;DR is it’s an instruction manual for the steps you need to either simply run, or build and run your application. That’s it. At its most basic level, it’s just a set of instructions for your app to run, which can include the ports it needs, the environment variables it can consume, the build arguments you can pass, and the working directories you will need. Now, since a container’s sole goal is to deliver your application with only the processes your application needs to run, we can take that information and begin to think about our existing application architecture. In the case of Mercury Health, and many similar customers who are planning their migration path from on-prem to the cloud, we have a legacy application that is not currently architected for cross platform support – i.e., it only runs on Windows.
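That “instruction manual” framing can be made concrete. A Dockerfile for a Windows-only legacy ASP.NET app might look like the hypothetical sketch below – the app path and publish folder are invented for illustration, though the base image is Microsoft’s published .NET Framework/IIS image:

```dockerfile
# Hypothetical Dockerfile for a legacy Windows-only ASP.NET app.
# Base image bundles Windows Server Core, IIS, and .NET Framework 4.8.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8

# Working directory inside the container (IIS default web root).
WORKDIR /inetpub/wwwroot

# Copy the pre-built site output into the image.
COPY ./publish/ .

# Document the port the app listens on.
EXPOSE 80
```

Each line maps to one item from the definition above: the working directory, the files the app needs, and the port it exposes – nothing more than the processes and context the application requires to run.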


How to Change Gender Disparity Among Data Science Roles

There are times that I see job reqs and recruiters come back saying they’re not finding that type of candidate -- that it doesn’t exist. I’m pretty convinced that the way the job requisitions are written inherently attracts individuals who may feel more confident. There’s a ton of data around the idea that individuals who identify as female are far less likely to apply to a role if they don’t tick every single box, whereas their male counterparts, if they check a third or less, will be bold and apply. I think we need to do a better job at writing job descriptions that are inclusive. If there are roles that you foresee your organization is going to need filled in AI, robotics, or edge computing -- some of the things that are the tip of the spear -- the whole market is stripped out irrespective of what gender or background you may have. That is a leading indicator that an investment needs to be made. Whether that’s investing in junior practitioners, or creating alliances and relationships with local colleges and universities, or being more creative about how you curate your class of interns so they have time to ramp up, you’ve got to handle both sides of it.


Cyber attackers rarely get caught – businesses must be resilient

Hackers are increasingly targeting SMBs as, to them, it’s easy money: the smaller the business is, the less likely it is to have adequate cyber defences. Even larger SMBs typically don’t have the budgets or resources for dedicated security teams or state-of-the-art threat prevention or protection. Ransomware, for instance, is one of the biggest threats companies are facing today. While we saw the volume of ransomware attacks decline last year, this was only because ransomware has become more targeted, better implemented, and much more ruthless, with criminals specifically targeting higher value and weaker targets. One of the most interesting – and concerning – findings from our report, “The Hidden Cost of Malware”, was that these businesses had become preferred targets because they can and will pay more to get their data back. About a quarter of companies in our survey were asked to pay between $11,000 and $50,000, and almost 35% were asked to pay between $51,000 and $100,000. In fact, ransomware has become so lucrative and popular that it’s now available as a “starter kit” on the dark web. This now means that novice cyber criminals can build automated campaigns to target businesses of any size.


How to Secure Employees' Home Wi-Fi Networks

A major security risk associated with remote work is wardriving: stealing Wi-Fi credentials from unsecured networks while driving past people's homes and offices. Once the hacker steals the Wi-Fi password, they move on to spoofing the network's Address Resolution Protocol (ARP). Next, the network's traffic is sent to the hacker, and that person is fully equipped to access corporate data and wreak havoc. A typical home-office router is set up with WPA2-PSK (Wi-Fi Protected Access 2 Pre-Shared Key), a type of network protected with a single password shared between all users and devices. Unfortunately, WPA2-PSK is by far the most common authentication mechanism used in homes, which puts employees at risk for over-the-air credential theft. WPA2-PSK does have a saving grace: what attackers capture over the air is encrypted and must be cracked offline before it can be used. A strong password can resist that cracking, but only if it is unique, complex, and of adequate length. Avast conducted a study of 2,000 households that found 79% of homes employed weak Wi-Fi passwords.
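The “unique, complex, and of adequate length” advice can be screened for mechanically. The sketch below is a crude, illustrative entropy estimate (length times bits per character of the observed character classes), not a real strength meter; the 60-bit threshold is an arbitrary assumption for the example:

```python
import math
import string

def passphrase_strength(password: str) -> float:
    """Rough entropy estimate in bits: length * log2(size of charset used).
    A crude screen only; real meters also check dictionaries and patterns."""
    pools = [
        (string.ascii_lowercase, 26),
        (string.ascii_uppercase, 26),
        (string.digits, 10),
        (string.punctuation, len(string.punctuation)),
    ]
    # Count only the character classes the password actually draws from
    charset = sum(size for chars, size in pools if any(c in chars for c in password))
    return len(password) * math.log2(charset) if charset else 0.0

def is_weak(password: str, min_bits: float = 60.0) -> bool:
    return passphrase_strength(password) < min_bits

# A short dictionary-style password is flagged; a long random one passes.
print(is_weak("sunshine1"))         # True
print(is_weak("V7#pk2!Qzr94$Lmx"))  # False
```

Nine lowercase-plus-digit characters come out around 46 bits, well inside the range of passwords the Avast study would call weak.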


Solve evolving enterprise issues with GRC technology

The key challenge organizations face in fulfilling regulator requests is keeping business data up to date. Organizations of all sizes are working to reduce the delay between distributing a risk assessment, receiving responses, understanding their risk insights, and making risk-based decisions. The insights an organization receives from this work can lose value over time if the data isn’t kept up to date and monitored for compliance. By leveraging data classification methods and risk formulas, organizations can reduce lag time, gain real-time risk insights, and standardize risk at scale. OneTrust GRC provides workflows to find, collect, document, and classify data in real time to gain meaningful risk insights and support compliance. ... What sets our GRC solution apart is that it is integrated into the entire OneTrust platform of trust. Trust differentiates as a business outcome, not simply a compliance exercise. Companies now need to mature beyond the tactical governance tools of the past and into a modern platform with centralized workflows that bring together all the elements of trust: privacy, data governance, ethics and compliance, GRC, third-party risk, and ESG. OneTrust does just that.


Indestructible Storage in the Cloud with Apache Bookkeeper

After researching what open source had to offer, we settled upon two finalists: Ceph and Apache BookKeeper. With the requirement that the system be available to our customers, scale to massive levels, and also be consistent as a source of truth, we needed to ensure that the system could satisfy the aspects of the CAP Theorem relevant to our use case. Let’s take a bird’s-eye view of where BookKeeper and Ceph stand in regard to the CAP Theorem (Consistency, Availability and Partition Tolerance) and our unique requirements. Ceph provides Consistency and Partition Tolerance; its read path can offer Availability and Partition Tolerance only by allowing unreliable reads, and a lot of work would still be required to make the write path provide Availability and Partition Tolerance. We also had to keep in mind the immutable data requirement for our deployments. We determined Apache BookKeeper to be the clear choice for our use case. It’s close to being the CAP system we require because of its append-only/immutable data store design and a highly replicated distributed log.


Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein

Daily Tech Digest - April 28, 2021

The Rise of Cognitive AI

There is a strong push for AI to reach into the realm of human-like understanding. Leaning on the paradigm defined by Daniel Kahneman in his book, Thinking, Fast and Slow, Yoshua Bengio equates the capabilities of contemporary DL to what he characterizes as “System 1” — intuitive, fast, unconscious, habitual, and largely resolved. In contrast, he stipulates that the next challenge for AI systems lies in implementing the capabilities of “System 2” — slow, logical, sequential, conscious, and algorithmic, such as the capabilities needed in planning and reasoning. In a similar fashion, Francois Chollet describes an emergent new phase in the progression of AI capabilities based on broad generalization (“Flexible AI”), capable of adaptation to unknown unknowns within a broad domain. Both these characterizations align with DARPA’s Third Wave of AI, characterized by contextual adaptation, abstraction, reasoning, and explainability, with systems constructing contextual explanatory models for classes of real-world phenomena. These competencies cannot be addressed just by playing back past experiences. One possible path to achieve these competencies is through the integration of DL with symbolic reasoning and deep knowledge.


Singapore puts budget focus on transformation, innovation

Plans are also underway to enhance the Open Innovation Platform with new features to link up companies and government agencies with relevant technology providers to resolve their business challenges. A cloud-based digital bench, for instance, would help facilitate virtual prototyping and testing, Heng said. The Open Innovation Platform also offers co-funding support for prototyping and deployment, he added. The Building and Construction Authority, for example, was matched with three technology providers -- TraceSafe, TagBox, and Nervotec -- to develop tools to enable the safe reopening of worksites. These include real-time systems that have enabled construction site owners to conduct COVID-19 contact tracing and health monitoring of their employees. Enhancements would also be made for the Global Innovation Alliance, which was introduced in 2017 to facilitate cross-border partnerships between Singapore and global innovation hubs. Since its launch, more than 650 students and 780 Singapore businesses had participated in innovation launchpads overseas, of which 40% were in Southeast Asia, according to Heng.


Machine learning security vulnerabilities are a growing threat to the web, report highlights

Most machine learning algorithms require large sets of labeled data to train models. In many cases, instead of going through the effort of creating their own datasets, machine learning developers search and download datasets published on GitHub, Kaggle, or other web platforms. Eugene Neelou, co-founder and CTO of Adversa, warned about potential vulnerabilities in these datasets that can lead to data poisoning attacks. “Poisoning data with maliciously crafted data samples may make AI models learn those data entries during training, thus learning malicious triggers,” Neelou told The Daily Swig. “The model will behave as intended in normal conditions, but malicious actors may call those hidden triggers during attacks.” Neelou also warned about trojan attacks, where adversaries distribute contaminated models on web platforms. “Instead of poisoning data, attackers have control over the AI model internal parameters,” Neelou said. “They could train/customize and distribute their infected models via GitHub or model platforms/marketplaces.”
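As a toy illustration of the poisoning idea Neelou describes, the sketch below tampers with a small fraction of a labeled dataset, appending a hidden trigger token and flipping the label to the attacker's target class. The trigger string, poisoning rate, and dataset are all invented for the example; real attacks are far subtler:

```python
import random

def poison_dataset(samples, trigger="xQz17", target_label=0, rate=0.05, seed=7):
    """Return a copy of (text, label) pairs in which a small random fraction
    has the trigger token appended and the label flipped to the attacker's
    target. A model trained on this data can learn the trigger as a backdoor."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < rate:
            poisoned.append((f"{text} {trigger}", target_label))
        else:
            poisoned.append((text, label))
    return poisoned

clean = [(f"sample {i}", 1) for i in range(1000)]
dirty = poison_dataset(clean)
flipped = sum(1 for _, label in dirty if label == 0)
print(f"{flipped} of {len(dirty)} samples carry the hidden trigger")
```

On clean inputs the trained model behaves normally; the mislabeled trigger samples are the “hidden triggers” an attacker can later invoke.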


Demystifying the Transition to Microservices

The very first step you should be taking is to embrace container technology. The biggest difference between a service-oriented architecture and a microservice-oriented architecture is that in the second one, the deployment is so complex, there are so many pieces with independent lifecycles, and each piece needs to have some custom configuration that it can no longer be managed manually. In a service-oriented architecture, with a handful of monolithic applications, the infrastructure team can still treat each of them as a separate application and manage them individually in terms of the release process, monitoring, health check, configuration, etc. With microservices, this is not possible at a reasonable cost. There will eventually be hundreds of different 'applications,' each of them with its own release cycle, health check, configuration, etc., so their lifecycle has to be managed automatically. There may be other technologies to do so, but microservices have become almost a synonym for containers. And not just manually started Docker containers: you will also need an orchestrator. Kubernetes and Docker Swarm are the most popular ones.


Ransomware: don’t expect a full recovery, however much you pay

Remember also that an additional “promise” you are paying for in many contemporary ransomware attacks is that the criminals will permanently and irrevocably delete any and all of the files they stole from your network while the attack was underway. You’re not only paying for a positive, namely that the crooks will restore your files, but also for a negative, namely that the crooks won’t leak them to anyone else. And unlike the “how much did you get back” figure, which can be measured objectively simply by running the decryption program offline and seeing which files get recovered, you have absolutely no way of measuring how properly your already-stolen data has been deleted, if indeed the criminals have deleted it at all. Indeed, many ransomware gangs handle the data stealing side of their attacks by running a series of upload scripts that copy your precious files to an online file-locker service, using an account that they created for the purpose. Even if they insist that they deleted the account after receiving your money, how can you ever tell who else acquired the password to that file locker account while your files were up there?


Linux Kernel Bug Opens Door to Wider Cyberattacks

Proc is a special, pseudo-filesystem in Unix-like operating systems that is used for dynamically accessing process data held in the kernel. It presents information about processes and other system information in a hierarchical file-like structure. For instance, it contains /proc/[pid] subdirectories, each of which contains files and subdirectories exposing information about specific processes, readable by using the corresponding process ID. In the case of the “syscall” file, it’s a legitimate Linux operating system file that contains logs of system calls used by the kernel. An attacker could exploit the vulnerability by reading /proc/<pid>/syscall. “We can see the output on any given Linux system whose kernel was configured with CONFIG_HAVE_ARCH_TRACEHOOK,” according to Cisco’s bug report, publicly disclosed on Tuesday. “This file exposes the system call number and argument registers for the system call currently being executed by the process, followed by the values of the stack pointer and program counter registers,” explained the firm. “The values of all six argument registers are exposed, although most system calls use fewer registers.”
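To make the file's layout concrete, here is a minimal parser for the /proc/<pid>/syscall format described in the quote: one decimal syscall number followed by six hex argument registers, then the stack pointer and program counter. The sample line is fabricated for illustration, not captured from a vulnerable system:

```python
def parse_syscall_line(line: str):
    """Parse one line in the /proc/<pid>/syscall format:
    '<nr> <arg1> ... <arg6> <sp> <pc>' with everything after <nr> in hex.
    A process not currently in a syscall reports '-1'; a running one 'running'."""
    fields = line.split()
    if fields[0] == "-1":
        return {"nr": -1}
    if fields[0] == "running":
        return {"nr": None}
    return {
        "nr": int(fields[0]),                       # syscall number (decimal)
        "args": [int(f, 16) for f in fields[1:7]],  # six argument registers
        "sp": int(fields[7], 16),                   # stack pointer
        "pc": int(fields[8], 16),                   # program counter
    }

# Illustrative sample line (not from a real system)
sample = ("232 0x4 0x7ffcb5e94350 0x80 0xffffffff "
          "0x0 0x8 0x7ffcb5e94330 0x7f2bd2f9b7e6")
info = parse_syscall_line(sample)
print(info["nr"], hex(info["sp"]), hex(info["pc"]))
```

It is exactly those leaked stack-pointer and program-counter values that make the bug useful to an attacker trying to defeat address-space randomization.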


Process Mining – A New Stream Of Data Science Empowering Businesses

It is needless to emphasise that data is the new oil, as data has shown us time and again that, without it, businesses cannot run. We need to embrace not just the importance but the sheer need of data these days. Every business runs on a set of processes designed and defined to make everything function smoothly, which is achieved through Business Process Management. Each business process has three main pillars – steps, goals, and stakeholders – where a series of steps is performed by certain stakeholders to achieve a concrete goal. And, as we move into a future where entire businesses are driven by a data value chain that supports decision systems, we cannot ignore the usefulness of data science combined with Business Process Management. This new stream of data science is called process mining. As Celonis, a world-leading process mining platform provider, puts it: “Process mining is an analytical discipline for discovering, monitoring, and improving processes as they actually are (not as you think they might be), by extracting knowledge from event logs readily available in today’s information systems.”


Alexandria in Microsoft Viva Topics: from big data to big knowledge

Project Alexandria is a research project within Microsoft Research Cambridge dedicated to discovering entities, or topics of information, and their associated properties from unstructured documents. This research lab has studied knowledge mining research for over a decade, using the probabilistic programming framework Infer.NET. Project Alexandria was established seven years ago to build on Infer.NET and retrieve facts, schemas, and entities from unstructured data sources while adhering to Microsoft’s robust privacy standards. The goal of the project is to construct a full knowledge base from a set of documents, entirely automatically. The Alexandria research team is uniquely positioned to make direct contributions to new Microsoft products. Alexandria technology plays a central role in the recently announced Microsoft Viva Topics, an AI product that automatically organizes large amounts of content and expertise, making it easier for people to find information and act on it. Specifically, the Alexandria team is responsible for identifying topics and rich metadata, and combining other innovative Microsoft knowledge mining technologies to enhance the end user experience.


How Vodafone Greece Built 80 Java Microservices in Quarkus

The company now has 80 Quarkus microservices running in production with another 50-60 Spring microservices remaining in maintenance mode and awaiting a business motive to update. Vodafone Greece’s success wasn’t just because of Sotiriou’s technology choices — he also cited organizational transitions the company made to encourage collaboration. “There is also a very human aspect in this. It was a risk, and we knew it was a risk. There was a lot of trust required for the team, and such a big amount of trust percolated into organizing a small team around the infrastructure that would later become the shared libraries or common libraries. When we decided to do the migration, the most important thing was not to break the business continuity. The second most important thing was that if we wanted to be efficient long term, we’d have to invest in development and research. We wouldn’t be able to do that if we didn’t follow a code to invest part of our time into expanding our server infrastructure,” said Sotiriou. That was extra important for a team that scaled from two to 40 in just under three years.


The next big thing in cloud computing? Shh… It’s confidential

The confidential cloud employs these technologies to establish a secure and impenetrable cryptographic perimeter that seamlessly extends from a hardware root of trust to protect data in use, at rest, and in motion. Unlike the traditional layered security approaches that place barriers between data and bad actors or standalone encryption for storage or communication, the confidential cloud delivers strong data protection that is inseparable from the data itself. This in turn eliminates the need for traditional perimeter security layers, while putting data owners in exclusive control wherever their data is stored, transmitted, or used. The resulting confidential cloud is similar in concept to network micro-segmentation and resource virtualization. But instead of isolating and controlling only network communications, the confidential cloud extends data encryption and resource isolation across all of the fundamental elements of IT: compute, storage, and communications. The confidential cloud brings together everything needed to confidentially run any workload in a trusted environment isolated from CloudOps insiders, malicious software, or would-be attackers.



Quote for the day:

"Lead, follow, or get out of the way." -- Laurence J. Peter

Daily Tech Digest - April 27, 2021

Engineering Bias Out of AI

Removing bias from AI is not easy because there’s no one cause for it. It can enter the machine-learning cycle at various points. But the logical and most promising starting point seems to be the data that goes into it, says Ebert. AI systems rely on deep neural networks that parse large training data sets to identify patterns. These deep-learning methods are roughly based on the brain’s structure, with many layers of code linked together like neurons, and weights given to the links changing as the network picks up patterns. The problem is, training data sets may lack enough data from minority groups, reflect historical inequities such as lower salaries for women, or inject societal bias, as in the case of Asian-Americans being labeled foreigners. Models that learn from biased training data will propagate the same biases. But collecting high-quality, inclusive, and balanced data is expensive. So Mostly AI is using AI to create synthetic data sets to train AI. Simply removing sensitive features like race or changing them—say, increasing female salaries to affect approved credit limits—does not work because it interferes with other correlations.
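The point that dropping a sensitive column does not remove its influence can be shown with a toy proxy. The sketch below fabricates data in which salary is strongly correlated with a binary gender attribute, so a model trained without the gender column still sees most of that signal through salary (all numbers are invented for the example):

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
# 'gender' is the sensitive feature; 'salary' acts as a correlated proxy
gender = [rng.randint(0, 1) for _ in range(5000)]
salary = [50_000 + g * 15_000 + rng.gauss(0, 5_000) for g in gender]

# Even with the gender column dropped, salary still encodes most of it
print(round(pearson(gender, salary), 2))
```

With these made-up parameters the correlation lands around 0.8, which is exactly why naive feature removal fails and why approaches like balanced synthetic data are being explored instead.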


Five types of thinking for a high performing data scientist

System-as-cause thinking - a pattern of thinking that determines what to include within the boundaries of our system (i.e., the extensive boundary) and the level of granularity of what is to be included (i.e., the intensive boundary). The extensive and intensive boundaries depend on the context in which we are analyzing the system and what is under the control of the decision maker versus what is outside their control. Data scientists typically work with whatever data has been provided to them. While that is a good starting point, we also need to understand the broader context around how a model will be used and what the decision maker can control or influence. For example, when building a robo-advice tool we could include a number of different aspects ranging from macro-economic indicators, asset class performance, company investment strategies, individual risk appetite, life-stage of the individual, health condition of the investor, etc. The breadth and depth of factors to be included depends on whether we are building a tool for an individual consumer, an advisor, a wealth management client, or even a policy maker in the government.

A software bug let malware bypass macOS’ security defenses

“The malware we uncovered using this technique is an updated version of Shlayer, a family of malware that was first discovered in 2018. Shlayer is known to be one of the most abundant pieces of malware on macOS so we’ve developed a variety of detections for its many variants, and we closely track its evolution,” Bradley told TechCrunch. “One of our detections alerted us to this new variant, and upon closer inspection we discovered its use of this bypass to allow it to be installed without an end user prompt. Further analysis leads us to believe that the developers of the malware discovered the zero-day and adjusted their malware to use it, in early 2021.” Shlayer is adware that intercepts encrypted web traffic — including HTTPS-enabled sites — and injects its own ads, making fraudulent ad money for the operators. “It’s often installed by tricking users into downloading fake application installers or updaters,” said Bradley. “The version of Shlayer that uses this technique does so to evade built-in malware scanning, and to launch without additional ‘Are you sure’ prompts to the user,” he said.


Top 3 Challenges for Data & Analytics Leaders

First and foremost, basic learning dictates that you can’t use data to drive every action until you give every decision maker access to data and the tools to act on it. In essence, you have to approach data strategically -- in a way that makes it available across departments and business users. This will amp up data literacy and embed fact-based decision making into organizational culture. Secondly, I’ve also been known to have TV screens installed that show the latest dashboards to encourage executive buy-in. Now, executives stand in the hallway to consult them daily; it publicizes how leadership makes decisions and sets an example for the entire enterprise. ... Hard benefits such as new revenue streams, improved operations, improved customer engagement, cost reduction and risk avoidance can be quantified. This can be achieved by using financial models that incorporate cost, benefits and risks to get to the ROI, net present value (NPV), and payback period. However, the causal link between D&A and soft benefits, like productivity gains or continued innovation across the organization from culture change, remains elusive for me.
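The financial models mentioned above reduce to standard formulas. As a sketch, the snippet below computes NPV and a simple payback period for a hypothetical D&A investment; the cashflows and discount rate are made-up illustrations, not figures from the article:

```python
def npv(rate, cashflows):
    """Net present value, where cashflows[0] is the upfront (usually
    negative) investment at t=0 and later entries are per-period benefits."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """First period at which cumulative (undiscounted) cashflow turns
    non-negative, or None if the investment never pays back."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical programme: $500k upfront, $200k of benefit per year for 4 years
flows = [-500_000, 200_000, 200_000, 200_000, 200_000]
print(round(npv(0.10, flows)))   # positive NPV at a 10% discount rate
print(payback_period(flows))     # pays back in year 3
```

Quantifying the hard benefits this way is what makes the ROI conversation with the board concrete; the soft benefits the author mentions resist exactly this kind of modeling.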


My Favorite Microservice Design Patterns for Node.js

Getting a microservice-based architecture to work around asynchronous events can give you a lot of flexibility and performance improvements. It’s not easy, since the communication can get a little bit tricky and debugging problems around it even more so, because there is no longer a clear data flow from Service 1 to Service 2. A great solution for that is to have an event ID created when the client sends its initial request, and then propagated to every event that stems from it. That way you can filter the logs using that ID and understand every message generated from the original request. Also, note that my diagram above shows the client directly interacting with the message queue. This can be a great solution if you provide a simple interface or if your client is internal and managed by your dev team. However, if this is a public client that anybody can code, you might want to provide them with an SDK-like library they can use to communicate with you. Abstracting and simplifying the communication will help you secure the workflow and provide a much better developer experience for whoever is trying to use them.
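A minimal sketch of the event-ID propagation idea, written in Python for brevity rather than Node.js (the field names are illustrative): the initial request mints the ID, and every derived event carries it forward, so one filter over the logs reconstructs the whole flow.

```python
import uuid

def new_event(payload, parent=None):
    """Wrap a payload with a correlation ID: a fresh ID for a client's
    initial request, or the parent's ID propagated to derived events."""
    return {
        "event_id": parent["event_id"] if parent else str(uuid.uuid4()),
        "payload": payload,
    }

# The initial request mints the ID; downstream services reuse it unchanged.
request = new_event({"action": "create_order"})
billing = new_event({"action": "charge_card"}, parent=request)
shipping = new_event({"action": "reserve_stock"}, parent=billing)
print(request["event_id"] == shipping["event_id"])  # True
```

In a real Node.js system the same role is typically played by a correlation-ID header or message attribute that every service copies into its log lines.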


Essential Rules For Autonomous Robots To Drive A Conventional Car

A driving robot could be more readily shared around and used on a widespread basis. A true self-driving car with built-in capabilities is merely one car. A driving robot could drive any conventional car. As such, the driving robot has greater utility, plus the cost of the driving robot can be spread amongst a multitude of users or owners in a more versatile way than could a singular self-driving car. A driving robot might provide additional uses. A true self-driving car has just one purpose, ostensibly it is a car that drives and that is all that it does (though, notably, this is a darned impressive act!). A driving robot might be able to perform other tasks, such as being able to get out of the car and carry a delivery package to the door of a house. Note that this is not a requirement per se and merely identifiable as a potential added use that might be devised. There are also various disadvantages of using a driving robot versus aiming to utilize or craft a true self-driving car, which I won’t delineate here. I urge you to take a look at my earlier article on the topic to see the articulated list of downsides or drawbacks.


Reinforcement Learning for Production Scheduling

Given a set of heterogeneous machines and a set of heterogeneous production jobs, compute the processing schedule that minimizes specified metrics. Heterogeneous means that both the machines and the jobs can have different properties, e.g. different throughput for the machines and different required processing times for the jobs, and many more in practice. Additionally, the real problem is complicated by a set of imposed constraints, e.g. jobs of class A cannot be processed on machines of class B, etc. Theoretically, this problem is a complicated instance of the “Job scheduling” problem, which together with “Capacitated vehicle routing” is considered a classic of Combinatorial Optimization (CO). Though this problem is NP-hard (no known exact solution in polynomial time), it is rather well studied by the CO community, which offers a handful of methods to solve its theoretical (simplified) version. However, the majority of the methods cannot cope with real-world problem sizes or the additional constraints mentioned above. That is why most of the time people in the industry resort to some form of stochastic search combined with domain-specific heuristics.
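As a taste of the heuristic end of that spectrum, here is a sketch of longest-job-first list scheduling for machines with different speeds: a classic greedy heuristic, not an exact solver, and the job sizes and speeds are invented for the example (real schedulers would layer on constraints like job/machine class compatibility).

```python
def greedy_schedule(job_times, machine_speeds):
    """Longest-processing-time-first list scheduling: sort jobs by size
    descending and place each on the machine where it would finish earliest.
    Returns a job->machine assignment and the resulting makespan."""
    finish = [0.0] * len(machine_speeds)
    assignment = {}
    for job, t in sorted(enumerate(job_times), key=lambda jt: -jt[1]):
        # Pick the machine on which this job would complete soonest
        m = min(range(len(finish)), key=lambda i: finish[i] + t / machine_speeds[i])
        finish[m] += t / machine_speeds[m]
        assignment[job] = m
    return assignment, max(finish)

jobs = [7, 3, 5, 2, 8, 4]   # processing units required per job
speeds = [1.0, 2.0]         # the second machine runs twice as fast
plan, makespan = greedy_schedule(jobs, speeds)
print(plan, makespan)
```

Heuristics like this give good (not optimal) schedules in milliseconds, which is the baseline any reinforcement-learning approach to this problem has to beat.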


How to empower your chief information security officer (CISO)

The new remote working environments that have been ushered in as a result of the pandemic have expanded the attack surface, meaning a need for added visibility over the network for the CISO. According to Adam Palmer, chief cyber security strategist at Tenable, clear communication with the organisation’s board about possible risks can go a long way in empowering security leadership. “CISOs will need to be aware, and effectively list the vulnerabilities before they inform the board of directors of what is being done and how to reduce and address them,” said Palmer. “By using a risk-based approach CISOs can profile the distributed risk across the extended enterprise, and explain this in the boardroom in the same business terms other functions use so all can understand and evaluate any controls that need to be implemented to address that risk effectively and cost-efficiently. “It will be tempting for management to purchase additional tools to alleviate the overall risk levels, and it is important to remember that a magic bullet is not the only solution.”


IT Meets Finance: How CFOs Drive Digital Success

CFOs are no longer bean counters with a fierce grip on the checkbook. Being a CFO today is about leadership -- understanding the growth levers that drive the business and the investments needed to get there. Right now, that growth driver is digital transformation. CFOs now must have a strong understanding of technology. CFOs are data-driven and are using predictive analytics and machine learning to ensure initiatives are driving real impact. CFOs should ask for data that indicates transformation efforts are maximizing ROI and driving tangible value across the business. They're looking to quantify the success of their digital transformation investments. ... CFOs are central to strategic decisions about transformation. They are focused on helping their companies not only survive the current climate but also come out stronger on the other side. While it can be hard for organizations to overhaul in the midst of uncertainty, it's the CFO's job to really advocate for and invest in projects that will push the business forward. CFOs can ensure investments impact every aspect of the business and drive more engagement and commitment from business leaders, ultimately ensuring better success.


Attackers can teach you to defend your organization against phishing

Should the attacker’s email manage to evade your mail gateway, the goal is to trick an employee into performing an action that executes a malicious payload. This payload is designed to exploit a vulnerability and provide the attacker with access to the environment. Ideally, you’ve got code execution policies in place so only certain types of files can be executed. You can prevent anything that’s delivered by email to be executed, to restrict things as much as you possibly can. The attacker knows this and is constantly trying to work around it, which is why you need to maintain an ability to detect the execution of malicious payloads from phishing emails on employee endpoints. But how? Design and frequently run test cases that simulate malicious payloads being executed on your employee endpoints. Monitor logs and alerts when performing code execution test cases to validate that you have both the necessary coverage and telemetry to recognize indicators of compromise. Where blind spots in telemetry are identified, develop and validate new detection use cases.



Quote for the day:

"Coaching is unlocking a person's potential to maximize their own performance. It is helping them to learn rather than teaching them." -- John Whitmore

Daily Tech Digest - April 26, 2021

Technology, Management or Data? How To Choose Your Career

Dr Sarkar stated, “Given the current job landscape, the methods of pursuing degrees and programs are evolving and getting digitised. Let’s take a look at a couple of examples, such as MTech Programs and MBA Programs. Classically, they are about depth in specific branches. Now, an MTech program or MBA program no longer includes only the core disciplines; rather, it has become a fusion of important disciplines required to solve real-time problems.” “Therefore, even if you are trying to think of yourself as going through the technology route, doing an MTech program, you will probably end up doing a fair amount of business applications using data, given the kinds of projects and courses. Similarly for an MBA program, you will go beyond the core disciplines and you will also use data and technology. The traditional programs are evolving to fit today’s workplace,” he added. At present, data holds a special place among organisations. This is one of the reasons why data is embedded within the current programs. The deployment of data is done through technology, for instance through cloud-based applications, he said.


Connected medical devices brought security loopholes mainstream

First, when it comes to firmware updates, it is advisable to initiate an orchestrated process that ensures only authorized administrators can make changes to the device and that the update is applied properly. An update failure should trigger an alert so the device can be otherwise secured or replaced by another device. Second, for patients, cybersecurity leaders must give clear instructions on how to install and configure the device as well as the home network. This will translate into proper operation and a secure connection to transmit encrypted data from patient to doctor. One potential solution is to tailor the device connection type. For example, peer-to-peer connections bypass the public cloud to deliver encrypted information between user and device. Third, for devices, strong authentication with public key schemes is a must. Similar to what is used by online banks, public key authentication uses cryptographic keys to identify and authenticate peers instead of a username and password. Using cryptographic keys for authentication has the advantage that they are practically impossible to brute-force crack and do not require the user to remember anything.
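To illustrate the challenge-response idea behind public key authentication, here is a deliberately toy RSA example with textbook-sized primes. It is illustrative only; real devices would use vetted crypto libraries and far larger keys (or elliptic curves), and padding schemes this toy omits:

```python
# Toy RSA parameters (textbook primes; never use sizes like this in practice)
p, q, e = 61, 53, 17
n = p * q                          # public modulus (3233)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+ modular inverse)

def sign(challenge: int) -> int:
    # The device answers a nonce, proving possession of the private key
    return pow(challenge, d, n)

def verify(challenge: int, signature: int) -> bool:
    # The server checks the response using only the public key (n, e)
    return pow(signature, e, n) == challenge

challenge = 1234          # fresh nonce sent by the server
sig = sign(challenge)
print(verify(challenge, sig))       # True
print(verify(challenge + 1, sig))   # False
```

Because the challenge is fresh each time, there is nothing reusable for an attacker to capture, which is the brute-force resistance advantage over a username and password.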


Clean Code for Data Scientist

The number one reason, from my experience, is the nature of our work being “high-risk”. Meaning, when we write the first line of code in a script, we usually don’t know what will happen with it — Will it work? Will it be in production? Will we ever use it again? Is it worth anything? We might end up spending much of our time on risky POCs or one-time data explorations. In those cases, writing the neatest, most time-consuming code might not be the right way to go. But then that POC we wrote in a sketchy fashion turns into an actual project, it even gets to production, and its code is a mess! Sound familiar? It used to happen to me all the time. ... What’s common to all code writers out there is the time aspect. Writing clean code costs more time up front since you need to think twice before writing any line of code. We’re always pushed or encouraged to get things done fast, and it might come at the expense of our code. Just remember — getting things done fast, while in a hurry, can come back to bite you later when you’re dealing with bugs on a daily basis. The time spent writing clean code will surely pay for itself in the time saved on bugs.


Stop using your work laptop or phone for personal stuff

In the age of remote work, it's easier than ever to blur the lines between our personal and professional tech. Maybe it's sending personal texts or emails from your work phone, editing personal documents or photos on your work laptop, or joining a virtual happy hour with friends from your work tablet. None of these actions may sound particularly risky, but as a former "IT guy" I'm asking, nay, pleading with you to stop doing them. At the very least, stop the potentially more hazardous activities, such as storing personal data on your work machine or storing sensitive company data on your personal devices. Do it for the security of your employer. But more importantly, do it for the safety, privacy and wellbeing of yourself, your family and friends. Cybersecurity incidents can have serious negative consequences for both your employer and you. And even if an actual security breach or data leak doesn't occur, you could be reprimanded, demoted, fired, sued or even criminally prosecuted. Take the case of former CIA director John M. Deutch.


The cyber security mesh: how security paradigms are shifting

Without a doubt, the cyber security teams in your business are finding themselves in an increasingly complex situation. The adoption of the cyber security mesh has been effectively accelerated by several drivers, including digital initiatives and the opportunity to take advantage of IoT, AI, advanced analytics and the cloud. These drivers, along with the demand for increased flexibility, reliability and agility, have led more and more businesses to adopt a cyber security mesh. This distributed cyber security approach offers a much-needed chance for increased reliability, flexibility and scalability. ... Ultimately, the continued breakdown of the traditional technology stack with elevated virtualisation of services means the way organisations look to protect themselves is set for an upgrade. Effective cyber security is about being able to match and marry your protection to the circumstances in the world around it. As society, technology and even government policy begin to change, so will your points of exposure. Of course, the past year has seen an acceleration in these changes, which has demonstrated that businesses should be as prepared for the unlikely as they are for the likely, which is exactly what a robust cyber security plan should provide.


Best practices: code review & test automation

Doing test automation is about writing code. Test automation code can easily be treated as a “second-class citizen”. As it’s not delivered to customers, its development is often less formalized and may lack the scrutiny and quality practices otherwise applied in the organization. Lately, I’ve been doing lots of code reviews. ... All of the reviews exclusively cover end-to-end test automation: new tests, old fixes, config changes, and framework updates. I adamantly believe that test automation code should undergo the same scrutiny of review as the product code it tests, because test automation is a product. Thus, all of the same best practices should be applied. Furthermore, I also look for problems that, anecdotally, seem to appear more frequently in test automation than in other software domains. Code review is a very important practice in the software development process. Made popular by the open-source community, it is now the standard for any team of developers. If it is executed correctly, the benefit is not only a reduction in the number of bugs and better code quality, but also a training effect for the programmer.


What You Might Not Realize About Your Multi-Cloud Model

It’s common knowledge in the tech world that the vast majority of organizations shifting to the public cloud are adopting a mix of hybrid and multi-cloud operating models as part of their cloud strategy. In response, all three of the major cloud providers (Amazon, Microsoft, and Google) are expanding their service offerings accordingly. Correspondingly, the market is seeing an uptick in customers using these hybrid and multi-cloud environments. According to recent Gartner research, over 80% of enterprises characterize their strategy as multi-cloud. This can run the gamut from organizations deploying a combination of providers to create a multi-cloud network to firms implementing five or more distinct public cloud environments. In reality, while these organizations think they are operating in a multi-cloud environment, they are simply operating “multiple clouds.” This is more than just semantics: multiple clouds do not equal multi-cloud. And not understanding the nuance may leave a lot on the table when it comes to a CIO managing enterprise IT.


Shift Left: From Concept to Practice

Because developers work with code and in Git, it is logical to apply security controls in Git. Looking at secrets leaks, shifting left means automatically detecting secrets in the code and allowing the different members of the SDLC to collaborate. Remediating secrets leaked in Git repositories is a shared responsibility among developers, operations, and application security (if the secret is exposed in internal code repos) or threat response (if the secret is exposed externally). The processes depend on the organization's size, culture, and how it splits responsibilities among teams. They all need one another, but developers are on the front line. They often know what the leaked secret gives access to. However, they can't always revoke the secret alone because it might affect production systems or fellow developers using the same credentials. Also, it's not only about revoking; it's also about redistributing (rotating), which falls under operations' responsibilities. While remediating, it is also important to keep security professionals' eyes on the issue. They can guarantee that proper remediation steps are followed and guide and educate developers on the risks.


Agile management: How this new way of leading teams is delivering big results

Porter says BP's initial implementations of Agile have helped the company embed its working processes into various areas of the business. He says the benefits of an Agile way of working are clear. "'It's really liberating' is what we're hearing from the various pilots and work that has started already," he says. "So we're seeing that play out in the broader BP and getting some really good indications back from where we have used Agile in the past and what's coming at us as we embed our design throughout the organisation." The introduction of Agile leadership isn't without its challenges. As managers empower their teams, they step back from the minutiae of decision-making. Get Agile management wrong and there's the possibility of chaos and anarchy. Good Agile managers don't use command-and-control approaches to manage their staff, but they do focus on fostering accountability. Porter says BP wants to avoid diluting the devolved decision-making processes that Agile encourages. Teams at the company are typically organised into small groups of between 10 and 20 people, depending on the organisational context.


How to Prioritize Your Product Backlog

Your product backlog should be a list of all the product-related tasks your team needs to complete, including the division of responsibility and the time frame. The catch is that the list is not meant to be final. It needs to be flexible and will change according to everything else that is happening. For example, a hotshot product release by a competitor may mean you need to ship a product update earlier than expected to compete, meaning everything else gets pushed back. Even events such as attending conferences (virtual or in-person) may mean that teams prioritize sales- and marketing-visible tasks in an effort to connect with new customers. The problem occurs when tasks keep getting pushed down the list, and the product manager struggles to maintain the momentum to review and organize all the tasks by preference and priority. An effective product backlog needs to be well structured, organized to be easily read and understood, and arranged to meet the company's strategic needs.



Quote for the day:

"Strong leaders encourage you to do things for your own benefit, not just theirs." -- Tim Tebow