Daily Tech Digest - December 18, 2020

Chaos Engineering: A Science-based Approach to System Reliability

While testing is standard practice in software development, it’s not always easy to foresee issues that can happen in production, especially as systems become increasingly complex in order to deliver maximum customer value. The adoption of microservices enables faster release times and more possibilities than we’ve ever seen before; however, it also introduces new challenges. According to the 2020 IDG cloud computing survey, 92 percent of organizations’ IT environments are at least somewhat in the cloud today. In 2020, we saw highly accelerated digital transformation as organizations had to quickly adjust to the impact of a global pandemic. With added complexity comes more possible points of failure. The trouble is that the humans managing these intricate systems cannot possibly understand or foresee all of the issues, because it’s impossible to predict how each of the individual components of a loosely coupled architecture will relate to the others. This is where Chaos Engineering steps in to proactively create resilience. The major caveat of Chaos Engineering is that things are broken in a very intentional and controlled manner while in production, unlike regular QA practices, where this is done in safe development environments. It is methodical and experimental, and far less ‘chaotic’ than the name implies.
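To make the “methodical and experimental” nature concrete, here is a minimal, self-contained sketch of a chaos experiment in Python. It is purely illustrative: the instance names, the steady-state hypothesis and the injected fault are all made up, and a real experiment would target live infrastructure through a chaos tool rather than an in-memory dictionary.

```python
import random

# Hypothetical in-memory model of a replicated service: each entry is one instance.
instances = {"web-1": True, "web-2": True, "web-3": True}  # True = healthy

def steady_state_ok(pool: dict) -> bool:
    """Steady-state hypothesis: at least two healthy instances can serve traffic."""
    return sum(pool.values()) >= 2

def run_experiment(pool: dict) -> None:
    assert steady_state_ok(pool), "steady state must hold before injecting the fault"
    victim = random.choice(list(pool))   # blast radius: exactly one instance
    pool[victim] = False                 # inject the fault (a simulated crash)
    print(f"killed {victim}; steady state still holds: {steady_state_ok(pool)}")
    pool[victim] = True                  # roll back so the experiment stays controlled

run_experiment(instances)
```

The structure — hypothesis first, smallest possible blast radius, explicit rollback — is what separates a controlled experiment from simply breaking things.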


ECLASS presents the Distributed Ledger-based Infrastructure for Industrial Digital Twins

Advancing digitalization, increasing networking and horizontal integration in the areas of purchasing, logistics and production, as well as in the engineering, maintenance and operation of machines and products, are creating new opportunities and business models that were unimaginable before. Classic value chains are turning more and more into interconnected value networks in which partners can seamlessly find and exchange the relevant information. Machines, products and processes are given Digital Twins, which represent all relevant aspects of the physical world in the information world. The combination of physical objects and their Digital Twins creates so-called Cyber Physical Systems. Over the complete lifecycle, the relevant product information and production data captured in the Digital Twin must be available to the partners in the value chain at any time and in any place. The digital representation of the real world in the information world, in the form of Digital Twins, is therefore becoming increasingly important. However, the desired horizontal and vertical integration and cooperation of all participants in the value network across company boundaries, countries, and continents can only succeed on the basis of common standards.


Data Protection Bill won’t get cleared in its current version

Pande from Omidyar Network India said stakeholders of the data privacy regulations should consider making the concept of consent more effective and simple. The National Institute of Public Finance and Policy (NIPFP) administered a quiz in 2019 to test how well urban, English-speaking college students understand the privacy policies of Flipkart, Google, Paytm, Uber, and WhatsApp. The students only scored an average of 5.3 out of 10. The privacy policies were as complex as a Harvard Law Review paper, Pande said. Facebook’s Claybaugh, however, said that “despite the challenges of communicating with people about privacy, we do take pretty strong measures both in our data policy which is interactive, in relatively easy-to-understand language compared to, kind of, the terms of service we are used to seeing.” Lee, who earlier worked with Singapore’s Personal Data Protection Commission, said the challenges of a data protection authority (DPA) are “manifold”. She said it must be ensured that the DPA is “independent” and is given the necessary powers, especially when it must regulate the government. The DPA must be staffed with the right people with knowledge of the technical and legal issues involved, she added.


India approves game-changing framework against cyber threats

The office of National Security Advisor Ajit Doval, sources said, noted that with the increasing use of Internet of Things (IoT) devices, the risk will continue to increase manifold, and the advent of 5G technology will further increase the security concerns arising from telecom networks. Maintaining the integrity of the supply chain, including electronic components, is also necessary for ensuring security against malware infections. Telecom is also the critical underlying infrastructure for all other sectoral information infrastructure of the country, such as power, banking and finance, transport, governance and the strategic sector. Security breaches resulting in compromise of the confidentiality and integrity of information, or in disruption of the infrastructure, can have disastrous consequences. Sources said that in view of these issues, the NSA office had recommended a framework -- the 'National Security Directive on Telecom Sector' -- which will address 5G and supply chain concerns. Under the provisions of the directive, in order to maintain the integrity of supply chain security and to discourage insecure equipment in the network, the government will declare a list of 'Trusted Sources/Trusted Products' for the benefit of the Telecom Service Providers (TSPs).


The case for HPC in the enterprise

Essentially, HPC is an incredibly powerful computing infrastructure built specifically to conduct intensive computational analysis. Examples include physics experiments that identify and predict black holes, or modeling genetic sequencing patterns against disease and patient profiles. In the past year, the Amaro Lab at UC San Diego performed modeling of the COVID-19 coronavirus down to the atomic level using one of the top supercomputers in the world at the Texas Advanced Computing Center (TACC). I hosted a webinar with folks from UCSD, TACC and Intel discussing their work here. Those types of compute-intensive workloads are still happening. However, enterprises are also increasing their demand for compute-intensive workloads. Enterprises are processing increasing amounts of data to better understand customers and business operations. At the same time, edge computing is creating an explosive number of new data sources. Due to the sheer amount of data, enterprises are leveraging automation in the form of machine learning and artificial intelligence to parse the data and gain insights while making faster and more accurate business decisions. Traditional systems architectures are simply not able to keep up with the data tsunami.


5 reasons IT should consider client virtualization

First is the ability to run different operating systems or different versions of the same operating system. For example, many enterprise workers are increasingly running applications that are cross-platform, such as Linux applications for developers, Android for healthcare or finance, and Windows for productivity. Second is the potential to isolate workloads for better security. Note that different types of virtualization models co-exist to support the diverse needs of customers (and applications in general are getting virtualized for better cloud and client compatibility). The focus of this article is full client virtualization, which enables businesses to take complete advantage of the capabilities of rich commercial clients, including improved performance, security and resilience. Virtualization on the client is different from virtualization in servers. It’s not just about CPU virtualization, but also about creating a good end-user experience with, for example, better graphics, more responsive I/O and networking, optimized battery life on mobile devices, and more. A decade ago, the goal of client virtualization was to use a virtual machine for a one-off scenario or workload.


The top 6 use cases for a data fabric architecture

A data fabric architecture promises a way to deal with many of the security and governance issues being raised by new privacy regulations and the rise in security breach incidents. "By far the largest positive impact of a data fabric for organizations is the focus on enterprise-wide data security and governance as part of the deployment, establishing it as a fundamental, ongoing process," said Wim Stoop, director of product marketing at Cloudera. Data governance is often approached in isolation, tied to a single use case such as regulatory compliance or a departmental requirement. With a data fabric, organizations are required to take a step back and consider data management holistically. This delivers the self-service access to data and analytics that businesses demand in order to experiment and quickly drive value from data. Such a degree of management, governance and security of data then also makes proving compliance -- both industry and regulatory -- more or less a side effect of having implemented the fabric itself. Although this is not a full solution, it greatly reduces the effort associated with adhering to compliance requirements. Platz cautioned that there is a wide gulf between a vision for a perfect data fabric and what is practical today. "In practice, many first versions of data fabric architectures look more like just another data lake," Platz said.


Malicious Browser Extensions for Social Media Infect Millions of Systems

"This could be used to gather credentials and other sensitive corporate data from the websites visited by the victim," he says. "We are preparing a technical blog post with more technical information and IoCs, but for now, we can share the ... malicious domains." The malicious extensions are the latest attempt by cybercriminals to hide code in add-ons for popular browsers. In February, independent researcher Jamila Kaya and Duo Security announced they had discovered more than 500 Chrome extensions that infected millions of users' browsers to steal data. In June, Awake Security reported more than 70 extensions in the Google Chrome Web store were downloaded more than 32 million times and which collected browsing data and credentials for internal websites. In its latest research, Avast found the third-party extensions would collect information about users whenever they clicked on a link, offering attackers the option to send users to an attacker-controlled URL before forwarding them to their destinations. The extensions also collect the users' birthdates, e-mail addresses, and information about the local system, including name of the device, its operating system, and IP addresses.


How to use Agile swarming techniques to get features done

Teams that concentrate on individual skills and tasks end up with some members far ahead and others grinding away at unfinished work. For example, a back-end developer is still working on a feature, while the front-end developer for that feature has finished coding. The front-end developer then starts coding the next feature. The team can design hooks into the code to let the front-end developers validate their work. However, a feature is not done until a team completes the whole thing, fully integrates it and tests it. Letting developers move asynchronously through the project might result in good velocity metrics, but those measures don't always translate to the team delivering the feature on time. If testers discover issues in a delivered feature, the entire team must return to already completed tasks. Let this scenario play out in a real software organization, and you end up with partially completed work on many disparate tasks, and nothing finished. The goal of Agile development is not to ensure the team is 100% busy, with each person grabbing new product backlog items as soon as they complete their prior task. This approach to development results in extensive multitasking and ultimately slows the flow of completed items.


Application Level Encryption for Software Architects

Unless it is well defined, the task of application-level encryption is frequently underestimated and poorly implemented, and it results in haphazard architectural compromises when developers find out that integrating a cryptographic library or service is just the tip of the iceberg. Whoever is formally assigned the job of implementing encryption-based data protection faces thousands of pages of documentation on how to implement things better, but very little on how to design things correctly. Design becomes a bumpy ride whenever you don’t anticipate the need for it and instead make a sequence of ad-hoc decisions in the hope of getting things done quickly. First, you face key model and cryptosystem choices, which hide under “which library/tool should I use for this?” Hopefully, you choose a tool that fits your use case security-wise, not the one with the most stars on GitHub. Hopefully, it contains only secure and modern cryptographic decisions. Hopefully, it will be compatible with other teams’ choices when the encryption has to span several applications or platforms. Then you face key storage and access challenges: where to store the encryption keys, how to separate them from the data, which integration points bring components and data together for encryption/decryption, and what the trust/risk level toward those components is.
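As a concrete starting point for the "which library/tool" and "where do the keys live" questions, here is a minimal sketch using the Python cryptography package's Fernet recipe. It is only an illustration of the shape of the design: load_key_from_kms is a hypothetical placeholder for whatever key store or KMS your architecture separates the keys into, and a real system would also address key rotation, access control and auditing.

```python
from cryptography.fernet import Fernet  # pip install cryptography

def load_key_from_kms() -> bytes:
    """Hypothetical stand-in for fetching the data-encryption key from a separate
    key store/KMS. In a real design the key never lives next to the data it protects."""
    return Fernet.generate_key()  # placeholder: generated locally only for this sketch

key = load_key_from_kms()
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"card_number=4111-0000-0000-0000")  # store only this
plaintext = fernet.decrypt(ciphertext)                            # decrypt only in trusted components
print(plaintext)
```

Even this tiny example surfaces the questions from the excerpt: who may call load_key_from_kms, which components are allowed to decrypt, and how the same choice stays compatible across teams and platforms.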



Quote for the day:

"Public opinion is no more than this: What people think that other people think." -- Alfred Austin

Daily Tech Digest - December 17, 2020

Danske Bank fights money laundering with AI

The bank has been on a mission to transform its anti-money laundering capabilities after substantial illegal transaction activity was uncovered at a Danske Bank subsidiary in Estonia. The case made global headlines when Danske Bank was found to have transferred more than €200bn of suspicious money through its tiny branch in Tallinn. Danske Bank’s transaction-focused AML-detection apparatus proved sluggish in identifying account and money transfer divergence anomalies at the branch in Estonia’s capital. During the pilot of the platform from Quantexa, Danske Bank improved its ability to detect suspicious activity in its market trading business in areas such as foreign exchange, securities and equities. “Harnessing technology enables us to identify complex financial crime behaviours more effectively. Running advanced analytics on a wide range of datasets can help us better detect, investigate and prevent financial crime,” said Satnam Lehal, head of financial crime detection at Danske Bank. The bank has now taken the pilot project to the next stage and integrated it with existing infrastructure.


Cybersecurity: A Bleak 'Progress' Report

The hack appears to be the result of the successful insertion of malware into SolarWinds' recent updates. The SolarWinds technology is used by the Pentagon, all five branches of the U.S. military, the State Department, NASA, the NSA, the Postal Service, the National Oceanic and Atmospheric Administration, the Department of Justice and the Office of the President of the United States, among others. Plus, it's installed at thousands of companies and organizations around the world. Perhaps these events will serve as a wake-up call to our citizenry and bring the severity of our exposure to adversarial cyberattacks into a more focused picture. Today, I'm recalling watching "60 Minutes" in February 2015 - an episode titled: "DARPA: Nobody's Safe on the Internet" with Leslie Stahl interviewing some intriguing guests. A review of that segment helps to illustrate that some efforts back then were misguided, and cybersecurity remains very much a work in progress. The "60 Minutes" report said the Department of Defense had put Dan Kaufman, a former video game developer, in charge of inventing technology to secure the internet and given him a staff of 25 "brainiacs" and a budget of $500 million a year to "do something to help national security" and "make the world a better, safer, more secure place."


SolarWinds attack explained: And why it was so hard to detect

The attackers kept their malware footprint very low, preferring to steal and use credentials to perform lateral movement through the network and establish legitimate remote access. The backdoor was used to deliver a lightweight malware dropper that had never been seen before and which FireEye has dubbed TEARDROP. This dropper loads directly in memory and does not leave traces on the disk. Researchers believe it was used to deploy a customized version of the Cobalt Strike BEACON payload. Cobalt Strike is a commercial penetration testing framework and post-exploitation agent designed for red teams that has also been adopted and used by hackers and sophisticated cybercriminal groups. To avoid detection, attackers used temporary file replacement techniques to remotely execute their tools. This means they replaced a legitimate utility on the targeted system with their malicious one, executed it, and then restored the legitimate original. A similar technique involved the temporary modification of system scheduled tasks: updating a legitimate task to execute a malicious tool and then reverting the task back to its original configuration.


3 Ways to Build a Buzzing Blockchain Community

Holding a hackathon is a sure-fire way of attracting programming talent and development activity to a blockchain platform. The concept is popular among many kinds of tech companies and developers. In the context of blockchain, a platform’s founders or operators will usually offer some type of incentive to participants in exchange for their creative contributions. Often, hackathons have been held as part of blockchain-developer conferences or events. However, with the social restrictions currently in place, many projects are forging ahead with remote hackathons, and with a high level of participation and success. ... Blockchain and cryptocurrencies are chock-full of jargon, acronyms and technical concepts that can be off-putting to many newcomers. Not to mention, investing or trading in cryptocurrency comes with particular risks that everyone should be aware of before they start. For this reason, companies that can provide newcomers with a comprehensive library of educational materials have an opportunity to inspire their loyalty. One example is Indian cryptocurrency exchange Bitbns, which has created its own multimedia academy targeted at those eager to learn about cryptocurrency.


How To Support a Zero Trust Model with Automation

Zero Trust security architectures are underpinned by a segmented environment that limits the movements of attackers, isolating them and reducing their impact. In a Zero Trust environment, all users, devices and applications are assigned a profile based on techniques like digital identity, device health verification and application validation. They are then granted restricted access based on their profile. This prevents attackers from moving around freely, but it is also a precise and methodical process that grants and maintains access for genuine users and resources as and when required. Enforcing granular permissions based on the profile of the user or digital asset is a complex process. Automation provides the means to perform these actions programmatically and at scale. It also ensures that new technologies and solutions can be constantly added, either to support or replace existing systems, so that actions are performed in line with Zero Trust policies. Faced with a backdrop of emergent threats and an enterprise environment that is constantly evolving, automation provides the means to adapt quickly. Policies can be rewritten, while systems and processes can be reconfigured to deal with sudden changes or requests. This can be done at a micro level, dealing with very specific issues, or on a broader macro level, dealing with enterprise-wide issues.
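To illustrate what "granular permissions based on the profile" can mean in code, here is a small, hedged sketch of a deny-by-default access decision in Python. The profile fields, policy table and resource names are invented for the example; a real Zero Trust deployment would source them from identity providers, device posture services and a policy engine, with automation keeping the policy table itself up to date.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Hypothetical profile assembled from identity, device-health and app-validation checks."""
    user_id: str
    device_healthy: bool
    app_validated: bool
    roles: set

# Illustrative policy: which roles may reach which segment.
POLICY = {"payments-db": {"payments-engineer"}, "hr-portal": {"hr"}}

def grant_access(profile: Profile, resource: str) -> bool:
    # Deny by default; every request must satisfy every check ("never trust, always verify").
    if not (profile.device_healthy and profile.app_validated):
        return False
    return bool(POLICY.get(resource, set()) & profile.roles)

print(grant_access(Profile("alice", True, True, {"payments-engineer"}), "payments-db"))  # True
print(grant_access(Profile("bob", True, False, {"payments-engineer"}), "payments-db"))   # False
```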


VPNs, MFA & the Realities of Remote Work

As organizations have adapted to remote work and adopted new solutions, it's critical they understand how their architecture has changed in order to identify the evolving threat surface. But it's also important to realize that an IT architecture is like a fingerprint; there are some common types, but ultimately, they're unique. VPN is more effective for an on-premises environment, while MFA is more effective for a cloud-based setup. Let's take VPNs as an example. The most straightforward use case of a VPN is to establish a secure connection to access corporate infrastructure. You're at home, on your own wireless network, but you connect through a VPN. The VPN is protected by a firewall device to access the corporate network. This model works well for organizations that have a data center and file servers on-site because they can still leverage their network perimeter to protect it. However, VPN traffic can get more challenging when you consider the scale of larger organizations. Once hundreds of remote employees are connecting through VPN, the burden of moving data to a point from which it can be distributed over the network can become significant.


CISOs should be ready to confront the psychology of cybersecurity in 2021

Clearly, we need to introduce more support around security awareness and understand the pressures that employees are under, whether those are self-inflicted or the result of external factors. This is also a real challenge for security folks. We’re still not that good at that kind of emotional intelligence. We love a process, we love a policy. But we’re still not very great at this touchy-feely, fluffy emotional space. There’s a real role here for a human resources professional to get engaged and help deal with this one. ... There is a real leadership opportunity there to create the right environment that encourages people to talk about some of those issues. We’ve seen some real progress in that space. Because, let’s face it, we all have good days and bad days. I think encouraging people to talk about that, to share those things, is hugely important, as is encouraging people to take breaks and move away from the screen. We’ve moved into a realm where those kinds of things are really important for us to be picking up on. Some of us are doing it quite naturally, perhaps, but they are not skillsets that are the strong suits of CISOs and security professionals.


Cost savings and security are key drivers of MSP adoption

When it comes to budgeting, the three areas of information technology noted as incurring the most expense by SMB and mid-market company executives are: information security (60%), data storage (48%), and data analytics/business intelligence (45%). The IT areas incurring the least expense in budgets are: application licensing (24%), digital transformation (21%), and data center automation (21%). “We conducted this survey to better understand the dynamics of how business executives perceive and select MSPs, as well as how MSPs can provide the greatest value to their SMBs and mid-market customers,” said Infrascale CEO Russell P. Reeder. “Based on the results, we recommend that MSPs continue to lead with solutions focused on security, data storage, and data analytics. Our most successful MSP partners are enabling their businesses to be more secure and to always access and analyze their data. The lifeblood of any business is its data, so it makes sense that securing it, backing it up, and analyzing it is most important to businesses.”


Supply chain attacks show why you should be wary of third-party providers

"Security is really only as good as the weakest link," says John Titmus, director of sales engineering EMEA at CrowdStrike, a security vendor. "Supply chain attacks are getting more widespread and growing in frequency and sophistication. You need to understand the nature of the risks and develop a security roadmap around it." Deep Root Analytics, a marketing firm used by the Republican National Committee, leaked the personal data of 200 million voters in 2017. This is a small company, that, according to its LinkedIn profile, has fewer than 50 employees. Deep Root Analytics accidentally put the data on a publicly accessible server. Larger service companies are also vulnerable. The Verizon breach, which involved six million customer records, was caused by Nice Systems, a provider of customer service analytics. Nice put six months of customer service call logs, which included account and personal information, on a public Amazon S3 storage server. Nice reports that it has 3,500 employees and provides services to more than 85% of Fortune 100 customers. Nice is tiny compared to Deloitte, an accounting firm with more than a quarter million employees.


How Apple’s pending search engine hints at a rise in enterprise search

For many years there’s been a gulf between the tools we use as consumers and the tools we use as knowledge workers. It’s often a source of frustration: why is it that at an internet search console we can find the answer to the most obtuse and bizarre question we can possibly imagine in seconds, through a couple of clicks of the mouse, yet it takes ten minutes to find the document that Jack from Accounts sent two weeks ago? The answer lies in the complexity of enterprise search as a function. Behind the veil of the easily accessible user interface, enterprise search is more complex than it appears and there are much greater technological challenges to be overcome, despite the visual similarities with internet search. For example, content online can easily be categorised by the number of clicks and views a page has received in order for relevancy to be established, as traffic volumes are incredibly high. However, the document that Jack from Accounts sent is unlikely to have been opened anywhere near as much, so other technologies, such as natural language processing, need to be relied upon in order to understand the content of documents and recommend relevancy.
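As a toy illustration of content-based relevance (ranking by what documents say rather than by how often they are clicked), here is a small sketch using TF-IDF and cosine similarity from scikit-learn. The documents and query are made up, and real enterprise search engines layer far more on top — permissions, metadata and richer NLP — but the contrast with click-based ranking is the point.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Q3 expense report for the accounts team",
    "Kubernetes cluster upgrade runbook",
    "Invoice template shared by accounts two weeks ago",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

query_vector = vectorizer.transform(["invoice from accounts"])
scores = cosine_similarity(query_vector, doc_vectors)[0]

# Rank documents by textual relevance instead of click counts.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```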



Quote for the day:

"Positive thinking will let you do everything better than negative thinking will." -- Zig Ziglar

Daily Tech Digest - December 16, 2020

5G and the sustainable future: a look to 2025

5G could boost economic resilience by providing a reliable second source of precise location and timing information, not only assisting global navigation satellite systems (GNSS) but complementing them. GNSS only work in “open sky” environments — they can’t provide information in “sheltered” settings such as tunnels or buildings. 5G delivers position and timing information that is just as accurate as GNSS but can do so while providing unbroken indoor and outdoor coverage. Broadband has become essential to our social lives and the world’s economies (both national and local), but there are still many “white spots” where no coverage exists. This digital divide must be addressed if more people are to access the economic opportunities and benefits that mobile connectivity provides. As future economic growth will depend less on basic connectivity and more on advanced service profiles, the harmful effects of these white spots will only become more acute. White spots in farmland hinder modern agricultural water management. An estimated 42 percent of the world’s accessible fresh water is lost through leaky irrigation systems, inefficient application methods and growing crops that are too “thirsty” for their environment.


SSO and MFA Are Only Half Your Identity Governance Strategy

The time employees spend waiting for access approval is paid time when they are not working. As mentioned, the time spent by IT staff entering or resetting passwords adds up. It's an unnecessary and costly allocation of resources. The crux of the problem is not only understanding which roles need access to which application assets but determining what the right level of access is. The faster this can be achieved with less human intervention, the greater the efficiency and cost savings. New automated solutions that harness machine learning hold promise to help IT and security teams with smart recommendations about where to direct their efforts. Prioritization is essential when managing thousands or tens of thousands of identities. Lost in the sea of identities, it is easy for organizations to lose track of which permissions they have granted. This can lead to permission sprawl and unnecessary exposure. However, automated tracking of users, their roles, and the permissions granted to them can dramatically reduce the risk of unused entitlements that attackers can exploit to gain access to valuable assets. Permission management has a lot of catching up to do to reach the robustness and adoption of SSO-related tools.
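A hedged sketch of what "automated tracking" of unused entitlements might look like at its simplest: compare what has been granted with what has actually been exercised, and flag the difference for review. The data structures and the 90-day threshold are illustrative only; a real identity governance tool would pull grants from the identity provider and usage from audit logs.

```python
from datetime import datetime, timedelta

# Hypothetical data: permissions granted per user vs. when each was last exercised.
granted = {"alice": {"billing:read", "billing:write", "admin:users"},
           "bob":   {"billing:read"}}
last_used = {("alice", "billing:read"): datetime(2020, 12, 10),
             ("alice", "billing:write"): datetime(2020, 6, 1)}

def stale_entitlements(user: str, now: datetime, max_age_days: int = 90) -> set:
    """Flag permissions that were never used, or not used within max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    stale = set()
    for perm in granted.get(user, set()):
        used_at = last_used.get((user, perm))
        if used_at is None or used_at < cutoff:
            stale.add(perm)
    return stale

# 'admin:users' (never used) and 'billing:write' (last used in June) get flagged.
print(stale_entitlements("alice", datetime(2020, 12, 18)))
```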


Microservice Architecture and its 10 Most Important Design Patterns

Microservice Architecture is about splitting a large, complex system vertically (per functional or business requirements) into smaller sub-systems which run as separate processes (and are hence independently deployable); these sub-systems communicate with each other via lightweight, language-agnostic network calls, either synchronously (e.g. REST, gRPC) or asynchronously (via messaging). ... In a Microservice Architecture, especially with a Database per Microservice, the Microservices need to exchange data. For resilient, highly scalable, and fault-tolerant systems, they should communicate asynchronously by exchanging Events. In such a case, you may want to have Atomic operations, e.g., update the Database and send the message. If you have SQL databases and want to have distributed transactions for a high volume of data, you cannot use two-phase locking (2PL) as it does not scale. If you use NoSQL Databases and want to have a distributed transaction, you cannot use 2PL as many NoSQL databases do not support two-phase locking. In such scenarios, use Event-based Architecture with Event Sourcing. In traditional databases, the Business Entity with the current “state” is directly stored. In Event Sourcing, any state-changing event or other significant events are stored instead of the entities.
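To make the Event Sourcing idea concrete, here is a minimal, hedged sketch in Python: instead of updating an entity in place, every state change is appended to an event store, and the current state is rebuilt by replaying those events. The event names and the in-memory list are illustrative; a production event store would be durable, append-only and typically paired with snapshots and an event bus.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    kind: str      # e.g. "Deposited" or "Withdrawn"
    amount: int

# The append-only event store is the source of truth; state is never updated in place.
event_store = []

def append(event: Event) -> None:
    event_store.append(event)   # in a real system: a durable, atomic append plus publish

def current_balance() -> int:
    """Rebuild the entity's current state by replaying all of its events."""
    balance = 0
    for e in event_store:
        balance += e.amount if e.kind == "Deposited" else -e.amount
    return balance

append(Event("Deposited", 100))
append(Event("Withdrawn", 30))
print(current_balance())  # 70
```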


Researchers develop tool that automates device programming in the IoT

A new solution is now on the table, seeking to standardize the programming of event-driven architectures: the AsyncAPI specification. This specification allows users to define all the relevant information needed to design and run IoT devices in these environments. However, AsyncAPI is still in the early stages of development and therefore the tools that support it remain in short supply. Despite this, the researchers have developed a tool based on this new proposal that allows users to automate the creation of messages in the appropriate format, as well as the sending and receiving of these messages. Abel Gómez said: “Much of the work that goes into implementing a program for an IoT device involves creating messages in the format that subscribers to the channel expect and also “translating” messages from other devices in order to process the information. A large amount of code must therefore be programmed and, when done manually, this can be a source of errors.” The researcher continued: “By adopting this new tool, we can significantly shorten the amount of time needed to develop and launch programs, which favours interoperability, improves code quality and in turn limits the number of errors in the software development life cycle.”
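This is not the researchers' tool, but a small hedged sketch of the kind of boilerplate such generation aims to remove: building a message in exactly the format subscribers to a channel expect. The channel name and payload fields are invented; in practice they would come from the AsyncAPI document describing the channel.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TemperatureReading:
    """Hypothetical payload for a 'sensor/temperature' channel, mirroring what an
    AsyncAPI message schema would describe."""
    sensor_id: str
    celsius: float
    recorded_at: str

def to_message(reading: TemperatureReading) -> bytes:
    # Serializing to the exact format subscribers expect is the repetitive, error-prone
    # step that code generation from an AsyncAPI document is meant to automate.
    return json.dumps(asdict(reading)).encode("utf-8")

reading = TemperatureReading("sensor-42", 21.5, datetime.now(timezone.utc).isoformat())
print(to_message(reading))
```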


Covid-19 sparks boom in Middle East digital payments sector

“The biggest opportunities lie in getting everyone in the region into the financial system,” said Dannish, adding that the advent of open banking will allow application programming interfaces (APIs) to transform regional payment systems. “APIs enable innovation and allow businesses to perform in a better way,” he said. “There is so much growth still to be had in this market. The barriers to entry have been lowered since the pandemic.” However, Yusuf noted that fragmentation remains a major barrier to growth for the digital payments sector. “The region is split in terms of payment methods, policy and regulation, infrastructure and consumer preference,” he said. “There is also a generally fragmented landscape in terms of payments partners. Merchants often have to operate payments strategy at a granular level.” Yusuf added that cash as payment remains stubbornly popular in the Middle East. “Despite the momentum away from cash, if we look at the region compared to the rest of the world, it is still cash-centric and digital payments are only on the way to fulfilling their potential,” he said. “While Covid-19 and deep digital penetration are fostering change, our report showed that cash has deep-rooted social and cultural significance in the region, and that won’t be transformed overnight.”


Building stronger multicloud security: 3 key elements

The typical enterprise IT infrastructure and solutions stack today includes not only public and private cloud deployments but also an average of 288 different SaaS offerings, according to the 2020 SaaS Trends report from tech vendor Blissfully. (That’s in addition to legacy technologies in many cases, too.) These various elements have different security requirements as well as different levels and types of built-in security capabilities. Different cloud providers have different tools, they often use different terms for the same class of tools, and they have differing positions on their security responsibilities. All this leaves CISOs having to stitch together a cohesive whole that documents whether the cloud-provided security features are adequate, whether more security is needed, and where and what additional security measures are warranted. “Cloud was supposed to make our lives simpler, and in a lot of ways it does; it provides a lot of benefits. But from a security perspective it adds a lot of complexity because there’s so much to do,” says Garrett Bekker... Respondents to the 2020 Cloud Threat Report survey from Oracle and KPMG cited complexity as a significant challenge, with 70% of respondents saying that too many specialized tools are required to secure their public cloud footprints and 78% highlighting the need for varying security policies and procedures between their cloud-resident and on-premises applications.


6 container concepts you need to understand

This architecture provides agility that is not feasible with VMs. Furthermore, containers support a more flexible model when it comes to compute and memory resources, and they allow resource-burst modes so that applications can consume more resources, when required, within the defined boundaries. In other words, containers provide scalability and flexibility that you cannot get from running an application on top of a VM. Containers make it easy to share and deploy applications on public or private clouds. More importantly, they provide consistency that helps operations and development teams reduce the complexity that comes with multi-platform deployment. Containers also enable a common set of building blocks that can be reused in any stage of development to recreate identical environments for development, testing, staging, and production, extending the concept of "write-once, deploy anywhere." Compared to virtualization, containers make it simpler to achieve flexibility, consistency, and the ability to deploy applications faster—the main principles of DevOps. Docker has become synonymous with containers. Docker revolutionized and popularized containers, even though the technology existed before Docker.


Concerns Run High as More Details of SolarWinds Hack Emerge

The security vendor has described UNC2452 as a threat actor that it has not encountered previously. FireEye has released indicators of compromise (IoCs) and signatures so organizations can detect the threat. But so far it has not publicly, at least, attributed the attack to any specific nation-state sponsor. Numerous media reports, however, have pinned the campaign on APT29, or Cozy Bear, a group thought to be associated with Russia's intelligence apparatus. Paul Prudhomme, cyber-threat intelligence analyst at IntSights, says his firm has so far not been able to corroborate or independently verify the claimed attribution to state-sponsored Russian cyber-espionage groups. "But we do nonetheless find the claim credible and worthy of further consideration," he says. The campaign is consistent with what IntSights has observed with state-sponsored Russian actors, including the targeting of the US government, the tight operational security, and the generally high level of sophistication and tradecraft involved. At the same time, "technology supply chain compromises of this kind are more typical of Chinese cyber-espionage groups than their Russian counterparts," Prudhomme says.


The state of AI in 2020: Biology and healthcare's AI moment

There's a ton of topics we could pick to dissect from Benaich and Hogarth's work, such as the use of PyTorch overtaking TensorFlow in research, the boom in federated learning, the analysis of talent and retention per geography, progress (or lack thereof) in autonomous vehicles, AI chips, and AutoML. We encourage readers to dive into the report to learn more. But we wrap up with something different. Hogarth mentioned that the speculation phase in AI for biology and healthcare is starting, with lots of capital flowing. There are going to be some really amazing companies that come out of it, and we will start to see a real deployment phase kick in. But it's equally certain, he went on to add, there are going to be instances that will be revealed to be total frauds. So, what about AI ethics? Benaich and Hogarth cite work by pioneers in the field, touching upon issues such as commercial gender classification, unregulated police facial recognition, the ethics of algorithms, and regulating robots. For the most part, the report focuses on facial recognition. Facial recognition is widespread the world over and has led to controversy, as well as wrongful arrests. More thoughtful approaches seem to be gathering steam, Benaich and Hogarth note.


Security automation: Time for a new playbook

Start by examining the processes and procedures your organization’s security team already has in place and identify the tasks that consume the majority of team members’ time. These will be the key use cases where SOAR can provide the most benefit by applying efficiency, speed and consistency. For example, in many organizations this might include processes such as looking up asset information or reviewing additional data points related to a security alert or a reported phishing email. It could be the process of pulling data on what’s running in memory on a device and adding that detail to an existing incident management ticket to assist in an investigative decision. Or it could be isolating hosts or blocking an IP range on the network in order to stop a threat from spreading. These are all common use cases that can be effectively automated, but only if the underlying processes and procedures are mature and well-defined. Different categories of automation require different levels of maturity in the underlying processes. If you plan to introduce any type of automated response – such as automated threat containment – you must be absolutely certain that the underlying processes are mature, or it could have a greater-than-intended impact on the availability of systems and people.
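A minimal sketch of the first, lowest-risk category mentioned above: automating the enrichment of an alert with asset context before a human decides what to do. The asset inventory, alert fields and suggested actions are all hypothetical; a real SOAR playbook would call out to a CMDB, EDR and ticketing system rather than local dictionaries, and automated containment would only be wired in once those processes are mature.

```python
# Hypothetical asset inventory; a real playbook would query a CMDB or asset API.
ASSETS = {"10.0.0.7": {"owner": "alice", "criticality": "high", "hostname": "fin-db-01"}}

def enrich_alert(alert: dict) -> dict:
    """Automate the repetitive lookup an analyst would otherwise do by hand."""
    asset = ASSETS.get(alert["src_ip"], {})
    suggested = "review_for_isolation" if asset.get("criticality") == "high" else "open_ticket"
    return {**alert, "asset": asset, "suggested_action": suggested}

alert = {"id": "ALRT-1001", "src_ip": "10.0.0.7", "signature": "suspicious powershell"}
print(enrich_alert(alert))
```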



Quote for the day:

"People with good intentions make promises, but people with good character keep them." -- Joubert Botha

Daily Tech Digest - December 15, 2020

Blockchain Vs Relational Database: What’s The Difference?

So, what is blockchain technology? Well, it’s a ledger system that is decentralized and distributed. Moreover, it also offers data integrity, transparency, and so on. In simple terms, data in a blockchain is connected in a chain-like format: any data in the ledger takes on a chain-like structure. So, just imagine a series of blocks that are interlinked. Each block is linked to the blocks before and after it, and as a result all the blocks form a chain of blocks, hence the name. Moreover, every single block on the ledger holds data or information about a transaction. So, what about the security of that transactional data? Well, every single block is cryptographically secured, and each block carries a cryptographic hash ID that no one can reverse engineer. You might think of blockchain as a database that just stores information. However, the difference is immense. In reality, the two are quite different, and we’ll get into that shortly in the blockchain vs relational database comparison. Blockchain is, by default, immutable, meaning that no one can modify any form of data whatsoever. Any information that gets into the system once can never be altered or deleted; it will stay in the ledger forever.
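The chain-of-hashes idea is easy to show in a few lines. Below is a hedged, minimal Python sketch — not any real blockchain implementation (there is no consensus, no signatures, no network) — in which each block records the hash of the previous one, so tampering with any earlier block breaks every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents (including the previous block's hash) with SHA-256.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": previous})

def is_valid(chain: list) -> bool:
    """Any edit to an earlier block changes its hash and breaks every later link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                    # True
chain[0]["data"] = "Alice pays Bob 500"   # tamper with history
print(is_valid(chain))                    # False
```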


6 Cloud Native Do’s and Don’ts for Developers

It’s easy to get so caught up in the question of what technologies you’re using that you forget why you’re using them in the first place. But remember that adopting cloud infrastructure — whether it’s a Kubernetes cluster in your own data center or a serverless API in the public cloud — isn’t the goal. The goal is to help your organization build more scalable and flexible applications, and to do it quicker. If you’re not actually taking into account the advantages and disadvantages of cloud infrastructure when you build applications, there’s a good chance you’re not actually meeting your organization’s real goals. ... Nodes crash. Networks fail. Remote APIs give unexpected results. Cloud native development requires you to handle these problems gracefully. Applications need to give users some sort of response, even if a component, or several components, are broken or non-responsive. You also need to think about how to recover once the broken or unavailable component is working again. Check out the Reactive Principles for additional guidance and techniques for getting started. ... Cloud native applications have unique compliance and security challenges.
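One common way to "give users some sort of response" when a dependency misbehaves is to retry with backoff and then fall back to a degraded answer. The sketch below is illustrative only: flaky_downstream_call is a made-up stand-in for a remote API, and production code would usually add timeouts, jitter and a circuit breaker.

```python
import random
import time

def flaky_downstream_call() -> str:
    """Hypothetical remote API that fails about half the time."""
    if random.random() < 0.5:
        raise ConnectionError("downstream unavailable")
    return "fresh result"

def call_with_fallback(retries: int = 3, base_delay: float = 0.1) -> str:
    for attempt in range(retries):
        try:
            return flaky_downstream_call()
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)   # exponential backoff between attempts
    return "cached/degraded result"                  # still answer the user, just less fresh

print(call_with_fallback())
```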


Security considerations for OTA software updates for IOT gateway devices

Security is a process and a mindset. There is no magic switch we can toggle to make a system secure. It is important to stay vigilant, reviewing existing security flaws and adapting your workflow to account for them. New classes of attacks appear seemingly every day and engineering teams must prepare for this in order to remain secure. The white hats have to get it right every time, while the black hats only need to get it right once. You need to identify what resources are worthy of being protected. A database of weather readings is unlikely to contain proprietary information, whereas a customer database most certainly does. You will want to tailor the security to match the severity of a breach. The objective of most security measures is to increase the cost of an attack or reduce the value of any successful breach. It is important to realize that the OTA update system is generally only concerned with potential attacks on, and vulnerabilities in, the update process itself. It does not provide any protection against attacks that happen outside of the update chain. For these kinds of attacks, you need to rely on other components provided by your operating system. One extremely important general security consideration is the principle of least privilege.
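To ground the update-integrity point, here is a hedged sketch of signed-update verification using Ed25519 from the Python cryptography package. It is a minimal illustration, not any particular OTA framework: key generation is shown inline only for the demo, whereas in practice the private key stays on the build infrastructure and only the public key is provisioned to devices, alongside rollback protection and secure boot.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Build side: sign the update artifact (the private key would never ship to devices).
signing_key = Ed25519PrivateKey.generate()
artifact = b"firmware-image-v1.2.3"
signature = signing_key.sign(artifact)

# Device side: only the public key is baked in; reject anything that doesn't verify.
verify_key = signing_key.public_key()

def update_is_trusted(image: bytes, sig: bytes) -> bool:
    try:
        verify_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

print(update_is_trusted(artifact, signature))            # True
print(update_is_trusted(b"tampered-image", signature))   # False
```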


Microsoft and the State of Quantum: Q&A With Mariia Mykhailova

The existing quantum hardware is just not mature enough to run quantum algorithms to solve real-world problems, both in terms of the number of qubits in the devices and their quality. However, quantum computing can have an impact today – it just requires some extra creativity! We call these solutions “quantum-inspired algorithms” – algorithms that were developed with quantum processes in mind but run on classical hardware. ... Microsoft Quantum’s mission is to develop a scalable and open quantum system and an ecosystem around it. This means that we’re working on building a full-stack quantum system, and that stack has a lot of layers. Some of these get a lot of publicity, such as the Microsoft Quantum Development Kit or the quantum hardware and the fundamental physics research required to implement our vision for it, the topological qubits. But there are other, less known but no less important layers of the stack between these two, such as qubit control technology that has to support scaling quantum systems to millions of qubits, way beyond the physical limitations of current systems. That being said, solving the world’s intractable problems is certainly not a single-company effort!


An Introduction to Blockchain + NoSQL Databases

Despite the benefits, distributed computing is not pervasive; even within modern enterprises, centralization of many systems is still quite common. This includes industries that you would expect to be designed with more resiliency in mind, like the global financial systems or supply chain management, which have tended to be more centralized around mainframe computing. By the way, you can always tell when there is a centralized system because when it fails, it fails absolutely! When all data or services are running on a single machine, it is quite easy to know when it goes down because everything completely stops. It may be because it takes time to start up a replacement machine, or to notice a failure before re-routing users, or any of a myriad of other devastating engineering reasons. A centralized system is the opposite of the peer-to-peer networks we aspire to. However, with the introduction of platforms like Bitcoin, the next generation of digital currency and “ledgers” is slowly being proven out. Now there are thousands of different cryptocurrencies and dozens of Blockchain backends that are taking advantage of decentralized technology. As an aside, note that “distributed ledger” does not equate to the proof-of-work scenarios that many cryptocurrencies use.


Ethical design thinking: empowering designers to drive ethical change

Designers have started to recognise that some of what they have created is harming people. They are now starting to look at the use of technology and its impact in the long term, with ethical design at the centre of their thinking. Despite their motivation, companies have accepted that AI bias exists and are changing how they harvest and use people’s data — and designers are central to this change in strategy. “The core is really around pivoting from what can be done, with the designer coming in at a later stage, to thinking about what should be done, with the designer coming in at the beginning of the process,” says Woodley. “The designer represents the human. They create what is consumed by the person and so they should be ones influencing the line between what the business wants, what is possible from a technology perspective and what is responsible from an ethical perspective,” she continues. Design thinking, starting with empathy or the understanding of the human, needs to be at the forefront of future technology innovations and services. We need to flip the current model. Instead of leveraging technology to achieve business goals without taking the human impact into consideration, we need to put the human at the centre of our technology endeavours.


What’s at stake in the Computer Fraud and Abuse Act (CFAA)

Intended as the United States’ first anti-hacking law, the CFAA was enacted almost thirty-five years ago, long before lawyers and technologists had any sense of how the Internet would proliferate and evolve. In fact, the Act is outdated enough that it specifically excludes typewriters and portable hand-held calculators as a type of computer. Since its inception, it has been robustly applied to everything from basic terms-of-service breaches, like the infamous case of Aaron Swartz downloading articles from the digital library JSTOR, to indicting nation-state hackers and extraditing Julian Assange. The core of the problem lies in the vague, perhaps even draconian, description of “unauthorized” computer use. While the law has been amended several times, including to clarify the definition of a protected computer, the ambiguity of unauthorized access puts the average consumer at risk of breaking federal law. According to the Ninth Circuit, you could potentially be committing a felony by sharing subscription passwords. The stakes are particularly high for security researchers who identify vulnerabilities for companies without safe harbor or bug bounty programs. White-hat hackers, who act in good faith to report vulnerabilities to a company before it is breached, face the same legal risks as cybercriminals who actively exploit and profit from those vulnerabilities.


Take any open source project — its contributors cut across national, religious and racial lines

“Open source is not all technical, and there is a strong community angle to this. During my college days, I’ve been involved in small ways with local user groups, where I used to conduct classes and tutorials on various topics. Once I moved to Bengaluru to work, I got heavily involved in the Python community and organised the first Python conference in India in 2009. PyCon India was probably one of the first language-specific tech conferences in India, and it has since then grown to be one of the largest PyCons in the world. This year, due to the coronavirus situation, we’re conducting the conference online. I’m also an active contributor to the Stack Overflow website, where I rank among the top 0.29 per cent worldwide for giving answers to questions.” Ibrahim feels that a lot of people don’t seem to realise that contributing something significant to a project requires a large amount of work. The main challenge is to develop patience and dedication to spend enough time to understand a project so that one can contribute to it. There are smaller problems, like some projects do not have enough contributors to help with technical problems, but overall, the main problem is the lack of discipline to put in the time necessary to achieve some level of proficiency.


Hear the Music Behind Your Data

When faced with the troves of data piling up daily, companies can become quickly overwhelmed. They’re unsure of where to begin an analysis for connections between data points. Data science is about exploring and seeking patterns within data, so it plays a pivotal role in getting companies started with their analyses. Oftentimes, data scientists won’t even know the question before they explore; instead, they’ll use their technology to identify emerging trends and patterns. Capturing and interpreting those patterns can provide tremendous benefits to a company. For example, data can help you catch bots that sign up and then spam your product. Human interaction with a product produces certain patterns — behavior forms a shape. You can compare that behavior shape to potentially anomalous datasets and determine if a user is human or not. That gives your team confidence in disconnecting potential bots from your system, which can save a fair amount of server space and money. Music is all about patterns, too. Composing a musical piece requires understanding how notes and the spaces between them all fit together to create cohesive patterns. Every song you’ve ever heard has a particular waveform derived from unique patterns of notes and spaces.
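As a hedged, toy illustration of "behavior forms a shape", the sketch below flags sessions whose actions arrive at suspiciously regular intervals — one crude signal that a signup might be scripted rather than human. The threshold and timestamps are invented; real bot detection combines many such features and tuned models.

```python
from statistics import pstdev

def looks_like_a_bot(event_timestamps: list, min_jitter_s: float = 0.05) -> bool:
    """Humans produce irregular gaps between actions; scripts tend to be metronomic.
    Flag sessions whose inter-event jitter is close to zero (illustrative threshold)."""
    gaps = [b - a for a, b in zip(event_timestamps, event_timestamps[1:])]
    return len(gaps) >= 3 and pstdev(gaps) < min_jitter_s

print(looks_like_a_bot([0.0, 1.0, 2.0, 3.0, 4.0]))   # True  (perfectly regular)
print(looks_like_a_bot([0.0, 2.3, 2.9, 7.1, 8.4]))   # False (human-like jitter)
```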


The Private Sector Needs a Cybersecurity Transformation

Fundamentally, the current approach to security is focused on the past — even if it's just a few milliseconds ago. Identifying a threat that already occurred and stopping the next one is not protection. And with the advances in technology available today, it should not be the accepted protocol for our industry. When a time-consuming analysis results in the conclusion of "we can block this attack next time," you are nowhere close to secure. Simply put, this approach does nothing to account for the agile adversaries that we know exist. Staying agile in this fight means looking forward, not back. For that to be a reality however, time plays a crucial role. Research from Ponemon Institute shows that security teams spend at least 25% of their time chasing false positives. I'd argue it's even higher. Defense cannot continue to be about uncovering the threats that have already happened while trying to block them again. Time has to be spent on truly preventing what's coming next. ... While hygiene is important, there is very little prevention going on at the threat level. Well-meaning employees have been stretched so thin that they find post-event response acceptable and equate it to cybersecurity. Sometimes hygiene equates to patching, but often there is a good reason why you can't patch.



Quote for the day:

“The real voyage of discovery consists not in seeking new landscapes but in having new eyes.” -- Marcel Proust

Daily Tech Digest - December 14, 2020

SDM and the DevOps Delusion of Connected Organizations

Looking across the enterprise, DevOps cultures fray further. A perfect DevOps practice envisions teams in different departments, business units and geographies all in sync, delivering value via software on a reliable cadence. In reality, enterprise DevOps is difficult to pull off because most organizations haven’t implemented shared languages, common processes and best practices across all of their teams with management buy-in. ... Adopting an SDM strategy on top of an existing DevOps culture can help an organization in a number of ways. Having disconnected tools, disconnected data and a lack of common language makes it difficult for one side of the organization to know what the other wants. Even in mature DevOps implementations, software delivery ends up being a patchwork of different processes for different teams. It’s also difficult to determine whether our teams are delivering the right end product if there’s no shared visibility and insight. SDM helps by establishing common data and common processes, giving visibility and insights across teams of different maturities, different tools and different technologies so you get all functions to collaborate. That way you can be sure you’re not just deploying more—you’re actually delivering continuous value.


Legal Framework Around Blockchain Technology Applications

Firstly, blockchain transactions have no borders and hence, jurisdiction is a vexed issue. Where and how a dispute can be raised remains an unanswered question. Blockchain technology depends upon public and private keys allotted to users and their interface with the hash function. Transactions conducted through blockchains have a sense of permanence. Though verification and authentication are the fulcrum of this technology, there is no means of enforcing a transaction in a court of law. Mistakes cannot be corrected and hence, the jurisprudence under the Indian Contract Act, 1872 relating to mistake of fact or law is wholly alien to these transactions. How is one to be held to a promise made in an entry in a ledger? When does the relationship between promisor and promisee get established? These are all questions for which answers would have to be found. What happens if a private key is misused or tampered with? Are there any legal remedies? One doesn't know. There is no central authority which monitors BT; however, recent adaptations of BT are being used by governmental authorities for verifying and authenticating ownership of moveable/immovable assets. What are the responsibilities of such an authority?


The year ahead in DevOps and agile: still some missing pieces to plug in

A big obstacle preventing the success of DevOps is quality control, Jainendra points out. "Teams are now able to move more quickly. However, error rates are not decreasing. For now, this can be more easily managed since so many of the changes are smaller and easier to revert. But as DevOps scales throughout the enterprise, this will add a layer of complexity to the process. Many organizations are still seeing quality issues in both their program and infrastructure code; maintaining low error rates is key for creating a successful DevOps practice by helping streamline larger releases." There has been more progress with agile, Jainendra believes. "There's been an increased interest in scaling agile methods by implementing broader agile management such as Scaled Agile Framework," he says. "Additionally, teams have benefited greatly from agile methodology and are seeing productivity boosts as a result of its ability to create a culture of efficiency. Even as everyone began to work remotely, we are still experiencing the same team collaboration benefits as we were before Covid-19."  What lies ahead, he states, is a need for greater customization of agile efforts. "Organizations often fail to tailor their agile initiatives to their organizations. ..."


The Importance of Technology in the Strategic Design of Workplaces in the COVID-19 Era

One of the principal problems of the shift to working virtually is the feeling of personal disconnect. Working from home can lead to isolation. For companies, this means that successfully transmitting their mission and values can be a difficult task. The problem is exacerbated when you consider that corporate culture currently drives office design. Co-creation platforms allow us to carry out interactive activities with employees and receive feedback in real-time. We use gamification techniques to increase engagement and ensure employees take ownership of the office design project. These platforms and activities give us an understanding of the company culture; the unwritten rules that define the behavior of people in a group. This critical information helps companies build a more flexible culture that encourages continuous learning and lets interior design strategists design an office that transmits and reinforces its values. ... Designing, and above all, explaining online projects can be complex. We often find that it’s difficult for customers to understand detailed plans and schematics from behind a screen. With the integration of the BIM (Building Information Modeling) methodology and virtual reality technologies, designers can offer companies virtual tours of their future workspaces.


Are you ready for multicloud? A checklist

IT leaders know the complexities of setting up secure and robust cloud infrastructures. Naturally, these complexities multiply when you combine multiple clouds. You should strive to avoid dealing with them all at once. Operating across multiple clouds is complex because of the required governance, technical expertise, and integrations. As Sarbjeet Johal, an independent technology strategist, puts it, “Nobody gets up in the morning and says we are going to do multicloud today. They just fall into it, mainly due to organizational silos. Multicloud is as easy as 1-2-3... said no one ever!” Joanne Friedman, Ph.D. and CEO of Connektedminds, suggests that IT teams leverage their primary cloud provider wherever possible, rather than hunt for new or better capabilities in a second provider. ... Other IT leaders share pragmatic viewpoints on how multiclouds evolve and how to navigate initial complexities. Travis Campbell, a big data consultant, offers this insight into where the multicloud journey begins: “Companies doing ‘multicloud’ but really treating it as a single cloud by each line of business are a special case here. For example, finance may have applications on cloud X, while engineering is deploying to cloud Y, and there’s no cross-pollination of work and data. It’s multicloud without hard problems.”


Why JSON Isn't a Good Configuration Language

Compared to many other configuration languages, JSON is pretty noisy. There is a lot of punctuation that doesn't aid human readability, although it does make parsers easier to implement. In particular, in configuration files the keys in objects are almost always identifiers, so the quotation marks around the keys are redundant. JSON also requires curly braces around the entire document, which is part of what makes it an (almost) subset of JavaScript and helps delimit different objects when multiple objects are sent over a stream; for a configuration file, though, the outermost braces are just useless clutter. The commas between key-value pairs are also mostly unnecessary in config files. Generally, you will have a single key-value pair per line, so it would make sense to accept a newline as a delimiter. Speaking of commas, JSON doesn't accept trailing commas. If it requires commas after each pair, it should at least accept trailing ones, since trailing commas make adding new entries to the end easier and lead to cleaner commit diffs. Another problem with JSON as a configuration format is that it has no support for multi-line strings.
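A minimal sketch in Python, using only the standard json module, illustrates several of these complaints: every key must be quoted, the outer braces are mandatory, a multi-line value has to be collapsed into an escaped "\n", and a trailing comma is a hard error. The host, port and greeting values below are just placeholders.

    import json

    # Every key quoted, outer braces required, and the multi-line
    # greeting collapsed into an escaped "\n" inside a single string.
    config_text = """
    {
        "server": {
            "host": "config.example.com",
            "port": 8080
        },
        "greeting": "Hello,\\nconfig readers"
    }
    """

    config = json.loads(config_text)
    print(config["server"]["port"])        # 8080

    # Trailing commas are rejected outright:
    try:
        json.loads('{"debug": true,}')
    except json.JSONDecodeError as err:
        print("trailing comma rejected:", err)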


A tech reboot of retail

Looking at the global figures, Michael O’Grady, principal forecast analyst at Forrester, says: “Retail categories like grocery and essential consumables are performing well, while other categories like fashion, beauty and cosmetics are seeing a marked decline in consumer spend.” Predictably, the coronavirus lockdowns have benefited e-commerce. ... “It’s really important to understand what people are wearing,” says Capgemini’s Peplow. Image analysis on social media may be among the approaches fashion retailers will need to use to remain relevant to high street shoppers. Clearly, operating seamlessly across multiple channels will become a key line of defence for bricks-and-mortar retailers to compete with internet retailers. But while online shopping is convenient and may well offer an item at the cheapest price, in-store shopping is more than just transactional. There is strong evidence that high street retailing needs to become more experiential, such as the way Natuzzi is using mixed reality in its showroom. And while the idea of stores “3D printing” products may seem far-fetched, Decathlon’s use of the technology shows there are niches where it works extremely well.


Startups Should Do Things That Don't Scale, but Security Isn't One of Them

Startups are especially attractive targets to hackers due to a combination of limited resources and the proliferation of business models that revolve around collecting customer data. In fact, research shows over 67% of companies with under 1,000 workers have experienced a cyberattack, and 59% were successfully breached. Investing in scalable security is a startup's best hope at defending against an attack that statistics say it should expect. Lack of scalability in security detracts from efficiency and opens gaps in a startup's networks. It forces IT to preoccupy itself with the endless application of security to new resources and users rather than with optimizing or monitoring. In these cases, companies are often too busy working in the trenches to notice they've been hacked until it's far too late. It's not all bad news, however. Security is no longer a zero-sum game. It has been commoditized into various products in recent years, allowing young companies to balance its risks and rewards by scaling in pieces that won't become obsolete or demand too much attention from IT. Security processes like encryption, firewalls, and authorized access once required hardware and lots of work to operate at scale. 


Hacked: US Commerce and Treasury Departments

Reuters first reported the incidents, with The Washington Post suggesting that a Russian hacking group known as Cozy Bear, aka APT29, is the source. The Post reported last week that the same group was behind an attack against cybersecurity firm FireEye. In an update late Sunday, FireEye warned that starting around March and continuing through May, software updates for SolarWinds' Orion product had been subverted with a backdoor, which it has dubbed "Sunburst." The malicious software updates were signed using valid digital signatures, and could steal files, profile systems and disable system services, it says. FireEye warns that "the actors behind this campaign gained access to numerous public and private organizations around the world." "We can confirm there has been a breach in one of our bureaus," the Commerce Department says in a statement. "We have asked CISA and the FBI to investigate, and we cannot comment further at this time." The U.S. Cybersecurity and Infrastructure Security Agency, or CISA, on Sunday issued an emergency directive "in response to a known or reasonably suspected information security threat," noting that the affected Orion versions are 2019.4 through 2020.2.1 HF1.


Moving from Agile Teams towards an Agile Organization

It is not unusual to see different levels of the organization in a large program talking in different terms. This is normal and expected. The level of detail that senior management has to deal with cannot be the same as that of the engineers modifying the cloud infrastructure of the deployment pipeline. However, when talking about progress, if some are talking about epics, others about features, implementation teams are discussing stories, and others are working on tasks, it shouldn’t be surprising that nobody really knows what the actual progress is. In the same way that it is important to agree on the moment at which progress is measured (e.g. when deploying to production, or when there is a certain change in some indicator), it is also important to agree on the unit of measurement to be used. Then we have a common unit of progress that is atomic. And by making this progress clearly visible, we raise overall awareness. This results in a velocity increase that can be amplified by other elements of the context. From a systemic point of view, everything matters. Not only are all the elements important, but the relationships between them are especially meaningful. It is like those pictures we used to draw as kids, connecting the dots to see the figure hidden in between.



Quote for the day:

"You can discover what your enemy fears most by observing the means they use to frighten you." -- Eric Hoffer

Daily Tech Digest - December 13, 2020

Different Normalization Layers in Deep Learning

Deep learning involves training carefully designed deep neural networks, and various design decisions impact the training regime of these networks. These decisions include which type of layer to use (convolutional, linear, recurrent, etc.), how many layers deep the network should be, what kind of normalization layer to use, if any, and which loss function to optimize for. These decisions largely depend upon the underlying task we are trying to solve and require a deeper understanding of the different options we have at hand. In this post, I will focus on one of them: different normalization layers in deep learning. Broadly, I would cover the following methods. ... One important thing to note is that, in practice, the normalization layers are placed between the Linear/Conv/RNN layer and the ReLU non-linearity (or hyperbolic tangent, etc.) so that the activations reaching the non-linear activation function are roughly centered around zero. This helps avoid dead neurons that never get activated due to poor random initialization and hence can improve training.
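As a brief illustration of that ordering (a minimal sketch assuming PyTorch; the layer sizes below are arbitrary), a block interleaving the layers this way might look like:

    import torch
    import torch.nn as nn

    # Normalization sits between the affine layer and the non-linearity,
    # so the activations that reach ReLU are roughly zero-centered.
    block = nn.Sequential(
        nn.Linear(128, 64),   # linear / affine layer
        nn.BatchNorm1d(64),   # normalization layer
        nn.ReLU(),            # non-linearity sees centered activations
    )

    x = torch.randn(32, 128)  # a batch of 32 examples
    print(block(x).shape)     # torch.Size([32, 64])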


Financial Fraud Through the Lens of Cybersecurity

Certain websites require the user to enter a security question and an answer for it. The list of questions is standard, and one of them usually is: “What is your mother’s maiden name?” This form of knowledge-based authentication is one of the most important aspects of conducting successful transactions online for high-value products, as most banks ask this as a security question for making any changes to the account. ... Credit card dumps are used by fraudsters to capture valuable card data such as the card number and expiration date. These can be obtained in a number of ways. The most popular method nowadays is “skimming”, a process in which an illegal card reader is used to copy the data from a credit card. Other methods include hacking into a retailer’s network, or a malware-infected point-of-sale device that, unknown to the retailer, sends information to cybercriminals. ... The Bank Identification Number is the first six digits that appear on a credit card, and it uniquely identifies the institution issuing the card. The BIN is key in the process of matching transactions to the issuer of the charge card. This numbering system also applies to charge cards, gift cards, prepaid cards and even electronic benefit cards.
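As a toy illustration of how the BIN prefix drives issuer matching (a Python sketch; the lookup table below is entirely hypothetical, not a real BIN database):

    # Hypothetical BIN-to-issuer table; real BIN databases are maintained
    # by the card networks and commercial providers.
    BIN_TABLE = {
        "411111": "Example Bank (Visa)",
        "550000": "Example Bank (Mastercard)",
    }

    def issuer_for(card_number: str) -> str:
        bin_prefix = card_number[:6]      # the BIN is the first six digits
        return BIN_TABLE.get(bin_prefix, "unknown issuer")

    print(issuer_for("4111111111111111"))  # Example Bank (Visa)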


New ransomware campaign exploits weak MySQL credentials to lock thousands of databases

Interestingly, the operators do not appear to be utilizing any actual ransomware payload in their attacks. The attack begins by brute forcing weak credentials on MySQL databases, followed by collection of data on existing tables and users, before installing a hidden backdoor on the way out to facilitate future break-ins. “By the end of execution, the victim’s data is gone – it’s archived in a zipped file which is sent to the attackers’ servers and then deleted from the database,” write authors Ophir Harpaz and Omri Marom. Guardicore Labs also spotted two distinct versions of this campaign. The first, between January and November 2020, accounted for roughly two-thirds of observed attacks and involved leaving a ransom note with a Bitcoin wallet address, a ransom demand, an email address for technical support and a 10-day deadline for payment. However, in leaving those breadcrumbs, the operators made it possible for researchers to poke around their Bitcoin wallet and examine how much money had been transferred to it. Ultimately, they traced nearly $25,000 in payments from four separate IP addresses.


Using OPA for multicloud policy and process portability

Multicloud also presents a second alluring possibility, an extension of that original cloud-native logic: the ability to abstract cloud computing architectures so they can port automatically and seamlessly (if not just quickly) between cloud providers to maximize performance, availability, and cost savings—or at least maintain uptime if one cloud vendor happens to go down. Cloud-agnostic platforms like Kubernetes, which run the same in any environment—whether that’s AWS, GCP, Azure, private cloud, or wherever—offer a tantalizing glimpse of how companies could achieve this kind of multicloud portability. But while elegant in theory, multicloud portability is complicated in practice. Dependencies like vendor-specific features, APIs, and difficult-to-port data lakes make true application and workload portability a complicated journey. In practice, multicloud portability only really works—and works well—when organizations achieve consistency across cloud environments. For that, businesses need a level of policy abstraction that works across said vendors, clouds, APIs, and so on—enabling them to easily port skills, people, and processes across the cloud-native business. While individual applications may not always port seamlessly between clouds, the organization’s overall approach should.
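As a hedged sketch of what consuming such a policy layer can look like from application code (the policy package name "multicloud/deploy" and the input fields are hypothetical, and a locally running OPA server is assumed; OPA's Data API is queried with a POST to /v1/data/<policy path>):

    import requests

    # Ask a locally running OPA server whether a deployment is allowed.
    # The same policy can be evaluated regardless of which cloud hosts the workload.
    OPA_URL = "http://localhost:8181/v1/data/multicloud/deploy/allow"

    deployment = {
        "input": {
            "cloud": "gcp",
            "region": "europe-west1",
            "encryption_at_rest": True,
        }
    }

    response = requests.post(OPA_URL, json=deployment, timeout=5)
    response.raise_for_status()
    print(response.json().get("result"))  # True or False, per the policy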


How COVID-19 impacted the relationship between the enterprise and its employees

Most organizations understood the innate value of employee well-being, which is defined in the report by a five-point system of indicators, all of these at work: feeling calm, feeling energized, rarely feeling overwhelmed by responsibilities, feeling positive about yourself, and having trusting relationships. Employee well-being remains critical as organizations continue to recover from this time of disruption, said Lauren Rice, XM scientist, Qualtrics, also in the report. "Whether it's providing flexibility to employees as they juggle work and personal responsibilities, supporting employees as they attend to any family health concerns, or just taking the time to listen to employees' concerns, it's a necessity at this time for organizations to care and support employees' well-being. When organizations care for their employees, the employees will in turn show care and dedication to their work and the organization." ... There's a great disparity between employers and their staff, according to the report, regarding acting on feedback: 92% of employees believe it's important that the company listens to feedback, but only 7% say their company does so.


Big data is just a crutch for unimaginative marketers to lean on

The first problem seems to be the quality and the processes used in data storage. We like to celebrate how much we have, but data normally comes from separate systems, measured in different ways, stored in different places. Most CTOs will embarrassingly admit they have too much data these days, not too little. They find the data dirty, contradictory and in systems that won’t mesh. If companies know that much about everything, why are we sent credit card offers for credit cards we already have? Why is my bank offering me cheap loans and pitching me funds to invest in at the same time? ... Big data has the temptation of making the complex seem simple. It may think I like certain movies, when in fact I like certain movies at certain times or in certain situations. The crap I watch after a beer on a plane ruins any algorithm’s chances, let alone an Airbnb guest using my TV. We endlessly refer to “data-driven” insights when I’ve only ever seen facts, not insights, from data. Insights are found by observation and, where needed, supported by data. No spreadsheet ever revealed anything as beautiful and transformational as an insight.


Remote working is starting to show tears. But going back to office isn’t only solution

The remote work experiment seemed to offer an initial boost in productivity. But sustaining such productivity has been difficult, in part because the home wasn’t designed for work and the consequences of “Zoom” fatigue are real. Indeed, emerging evidence suggests burnout is plaguing remote workers across the board. Yet managing employee burnout is particularly difficult during a pandemic, when people are asked to mostly isolate at home, away from colleagues whose mere presence can often ease work-related stress. ... What’s worse, corporate policies meant to monitor and control employee behavior – whether while they work remotely or as a means to make the office safer – risk eroding worker trust and undermining cultural norms. And the impact of these policies will likely endure long after the crisis subsides. ... A third major cost of this sustained period of remote work is the lack of collaboration and its disruptive impact on innovation. Sure, some collaboration and idea generation can take place via Zoom meetings, but innovation still largely happens in physical spaces: at lab benches, alongside a 3D printer or in unintended office interactions that spark interdisciplinary collaborations.


Effective Use of Blockchain in the Legal Sector – A near future or a distant possibility?

Times changed and so did the use of technology in the legal sector. There is no doubt that technology in the legal sector has improved efficiency, reduced errors and further demystified the operations of the court system, but the growth has been slow. Perhaps this is because the traditional legal system is a relic of the past which, while adjusting itself to modern technology, is still, metaphysically, attached to its ancient and archaic roots. The present-day legal system, which has been slow in keeping up with the changes in technology, has for the very first time attempted to digitise itself overnight during these unprecedented pandemic times. Virtual court hearings and paperless filings, even with certain limitations, have been welcomed by the legal sector. The credit for the digitisation of courts should be given to the Supreme Court’s e-committee headed by Hon’ble Justice DY Chandrachud. The e-committee, by swiftly implementing a contingency plan, has enabled our courts to continue operating even during this pandemic, thus helping thousands to get justice. However, the growth should not stop here, and this opportunity should also be fully utilised to explore other technological innovations which can be imbibed and integrated into the prevalent legal tech.


Researchers warn of security vulnerabilities in these widely used point-of-sale terminals

Attackers could gain access to the devices to manipulate them in one of two ways. Either they physically gain access to the PoS terminal, or they gain access remotely via the internet and then execute arbitrary code, buffer overflows and other common techniques which can provide attackers with an escalation of privileges and the ability to control the device – and see and steal the data that goes through it. Remote access is possible if an attacker gains access to the network via phishing or another attack and then moves freely around the network to the PoS terminal. Ultimately, the PoS machine is a computer, and if it's connected to the network and the internet, then attackers can attempt to gain access to and manipulate it like any other insecure machine. The way the PoS terminal communicates with the rest of the network means attackers could access unencrypted card data, including Track2 and PIN information, providing all the information required to steal and clone payment cards. In order to protect against attacks exploiting PoS vulnerabilities, it's recommended that retailers using the devices ensure they're patched and up to date, and they should avoid using default passwords where possible.


The Right Data Governance for Your Organization’s Culture

The first step is to observe. Places to look for indications of core culture are the mission statement, vision documents, and posters put up by the Human Resources department. Since culture is driven at the leadership level, observe what leadership values and rewards. In a more formal process, surveys of staff and leadership, past and present, can provide knowledge useful for determining the company culture. Recognize that culture clash is a possibility with mergers and acquisitions. In those situations, “Make sure that you have those conversations at your executive level because you can’t really drive that through your Data Governance programs,” Levins said. ... Although most companies have a predominant core culture, and there are often subcultures within that culture, Levins said that for the purpose of the presentation they would be focusing on the core culture of the organization as a whole. Each culture has strengths and pitfalls. Elder noted, “The things that we love about our partner also drive us crazy sometimes.” ... Cultivative culture is focused on people and possibility, said Elder. “How can we make the future better? How can we make people better?”



Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer