Daily Tech Digest - April 08, 2021

5 Reasons Why I Left the AI Industry

For decades, AGI has been the main goal driving AI forward. The world will change in unimaginable ways when we create AGI. Or should I say if? How close are we to creating human-level intelligent machines? Some argue that it’ll happen within decades. Many expect to see AGI within our lifetimes. And then there are the skeptics. Hubert Dreyfus, one of the leading critics, argued that “computers, who have no body, no childhood and no cultural practice, could not acquire intelligence at all.” For now, it seems that research in AI isn’t even going in the right direction to achieve AGI. Yann LeCun, Geoffrey Hinton, and Yoshua Bengio, winners of the Turing Award — the Nobel Prize of AI — in 2018, say we need to imbue these systems with common sense and we’re not close to that yet. They say machines need to learn without labels, as kids do, using self-supervised learning (also called unsupervised learning). That’d be the first step. However, there’s still too much we don’t understand about the brain to try and build AGI. Some say we don’t need to create conscious machines to equal human intelligence.


The leap of a Cycldek-related threat actor

In the nebula of Chinese-speaking threat actors, it is quite common to see tools and methodologies being shared. One such example of this is the infamous “DLL side-loading triad”: a legitimate executable, a malicious DLL to be sideloaded by it, and an encoded payload, generally dropped from a self-extracting archive. Initially considered to be the signature of LuckyMouse, we observed other groups starting to use similar “triads”, such as HoneyMyte. While this means that it is not possible to attribute attacks based on this technique alone, it also follows that efficient detection of such triads reveals more and more malicious activity. ... Taking a step back from the FoundCore malware family, we looked into the various victims we were able to identify to try to gather information about the infection process. In the vast majority of the incidents we discovered, it turned out that FoundCore executions were preceded by the opening of malicious RTF documents downloaded from static.phongay[.]com. They were all generated using RoyalRoad and attempt to exploit CVE-2018-0802.
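To make the “triad” pattern concrete, here is a minimal, hypothetical hunting sketch in Python: it simply flags directories that contain an executable, a DLL it could sideload, and an opaque blob that may be an encoded payload. The payload extensions and scan root are assumptions for illustration, not indicators from the report.

```python
import os
from pathlib import Path

# Hypothetical heuristic: flag directories that contain an executable,
# a DLL it could sideload, and an opaque blob that may be an encoded payload.
PAYLOAD_SUFFIXES = {".dat", ".bin", ".cache"}  # assumed, not from the article

def find_triads(root: str):
    """Yield directories matching the legit-EXE / sideloaded-DLL / payload pattern."""
    for directory, _, files in os.walk(root):
        exes = [f for f in files if f.lower().endswith(".exe")]
        dlls = [f for f in files if f.lower().endswith(".dll")]
        blobs = [f for f in files if Path(f).suffix.lower() in PAYLOAD_SUFFIXES]
        if exes and dlls and blobs:
            yield directory, exes, dlls, blobs

if __name__ == "__main__":
    for hit in find_triads(r"C:\Users"):  # assumed scan root
        print("possible side-loading triad:", hit)
```

A real detection would of course add signature checks and behavioral context; the point is only that the triad is a simple, searchable disk pattern.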


Your Top .NET Microservices Questions Answered

Autonomy for teams to work with their microservices is a crucial benefit of architecting cloud-native apps. It is preferable to use independent database instances to give the teams the flexibility to roll out updates, security patches, and bug fixes in production without breaking other microservices. Cloud-native app architecture takes inspiration from the famous 12-factor app methodology. One factor, “Backing Services,” states that ancillary resources like data stores, caches, and message brokers should be exposed via an addressable URL. Cloud providers offer a rich assortment of managed backing services. Instead of owning and maintaining the database yourself, we recommend checking out the available database options in the cloud. ... Monolithic apps can talk with microservices if their endpoints are reachable within the infrastructure or securely using a public endpoint. Microservices and their data can either be consumed synchronously via their endpoints or asynchronously through messaging like the Event Bus. As part of modernizing techniques, we recommend the strangler pattern, which helps in incrementally migrating a legacy system, as sketched below.
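Here is a minimal, language-agnostic sketch of the strangler pattern, written in Python rather than .NET: a thin facade routes already-migrated paths to the new microservice and sends everything else to the legacy monolith. The endpoints and path prefixes are hypothetical.

```python
# Hypothetical backing-service URLs, in the 12-factor spirit of addressable resources.
LEGACY_BASE = "http://legacy-monolith.internal"       # assumed legacy endpoint
MICROSERVICE_BASE = "http://orders-service.internal"  # assumed new endpoint
MIGRATED_PREFIXES = ("/orders", "/invoices")          # routes already "strangled" out

def route(path: str) -> str:
    """Return the backend base URL that should serve this request path."""
    if path.startswith(MIGRATED_PREFIXES):
        return MICROSERVICE_BASE
    return LEGACY_BASE

print(route("/orders/42"))    # -> new microservice
print(route("/customers/7"))  # -> legacy monolith
```

As more functionality moves out of the monolith, prefixes migrate into the facade's routing table until the legacy system can be retired.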


The First Time Jessica Alba Pitched Her Now-Unicorn Startup, It Failed. Here's How She Pivoted

Failure is part of every entrepreneur’s journey. When you care deeply about an idea, it can feel hard when you encounter people who don’t share or see your vision. Here are a few tips to stay the course when things aren’t going your way at first. ... Alba recruited friends at every step of the way who served as her sounding board. These people didn’t baby her and give her false hope; they asked the hard questions that exposed each and every possible weakness. Rely on trusted friends and confidantes to give you tough love, and your pitch will come off stronger to those who will have the final say. ... At first, everyone told Alba she should start with one product, then expand once that was successful. But this didn’t gel with Alba’s vision of a complete line of baby-safe products; the founder knew parents who wanted clean products wanted a brand that could provide multiple solutions. Ultimately, Alba ignored the conventional advice and launched with 17 products, which many people believed was too many. But because she didn’t compromise on that, either to venture capitalists or herself, the launch was a total success.


6 Best Practices for Remote Work by Agile Software Development Teams

The sudden shift to remote working was unexpected, but it was surprisingly well implemented in most cases. After months of remote working, let’s look at the progress being made by remote development teams. A recently published report on 50 remote agile development teams showed mixed results: 92% of teams are writing more code, up by an average of 10%, which sounds good. Unfortunately, 63% of teams are releasing less frequently, with the total number of releases down by a worrying 21%. On top of this, the average release size is up by 64%, increasing risk and time to value. So before the COVID-19 pandemic, we had frequent, small releases and were very agile. Now we have infrequent, high-risk, large releases. This is not the ideal situation for agile, newly remote teams. ... First, review your remote team situation. Because we have lost the benefits of colocation, where constant interaction, easy pairing and water cooler conversations aid teamwork, we need to address collaboration in other ways. ... Remote working is a skill that requires time and effort to develop. Video conferencing is a great way to engage with your team.


To cool datacenter servers, Microsoft turns to boiling liquid

Microsoft investigated liquid immersion as a cooling solution for high-performance computing applications such as AI. Among other things, the investigation revealed that two-phase immersion cooling reduced power consumption for any given server by 5% to 15%. The findings motivated the Microsoft team to work with Wiwynn, a datacenter IT system manufacturer and designer, to develop a two-phase immersion cooling solution. The first solution is now running at Microsoft’s datacenter in Quincy. That couch-shaped tank is filled with an engineered fluid from 3M. 3M’s liquid cooling fluids have dielectric properties that make them effective insulators, allowing the servers to operate normally while fully immersed in the fluid. This shift to two-phase liquid immersion cooling enables increased flexibility for the efficient management of cloud resources, according to Marcus Fontoura, a technical fellow and corporate vice president at Microsoft who is the chief architect of Azure compute. For example, software that manages cloud resources can allocate sudden spikes in datacenter compute demand to the servers in the liquid cooled tanks.


Generalists Vs. Tech Leaders: AI Adoption At Any Stage

When asked to identify intended users for their AI tools and technologies, over half of respondents identified clinicians as target users, with healthcare providers a close second. This is a big leap from AI being used primarily by data scientists and IT professionals, as was common in years past. This trickle-down effect of users persists even further when you consider the customers of mature organizations’ AI tools. ... As advances and applications of AI technologies grow, so do their intended user bases, so it’s important for all organizations to consider who they’re tailoring usability to. A patient interacting with a chatbot to schedule an appointment is very different from a radiologist using NLP to analyze the results of an X-ray—and those are considerations that need to be evaluated when imagining the user experience. All organizations should be taking this into account, whether they’ve been deploying solutions for years now or are just getting started. As AI becomes more commercialized, newer players will take the lead from more mature companies that have had to evolve their customer base over the years.


Email overload? These new 'right to disconnect' rules could be the answer

Employees in Ireland are already protected by a number of labor laws. For example, they are not allowed to work more than 48 hours per week on average, except in very limited circumstances. The right to disconnect established in the new code, however, does not constitute a legal obligation: although the code's recommendations will be admitted as evidence in court proceedings, failure to abide by the rules will not constitute an offence. Rather, the code of practice should be seen as a guide for both employers and employees to come up with appropriate working arrangements together. This does not mean that all employees should start inflexibly working a nine-to-five schedule. The code of practice encourages employers to develop a "Right to Disconnect Policy" that informs workers of the normal working hours that will be reasonably expected of them, but also makes room for the occasional emergency that requires contacting staff outside of their workday, for example to fill in at short notice for a sick colleague. Any new policy should also acknowledge that some roles come with unconventional hours, such as those working across different time zones or requiring international travel.


The best of both worlds: Making the most of your Hybrid IT strategy

The move towards greater use of the cloud has followed growing concerns about the management and protection of data. Cyber threats are continuing to evolve and accelerate, and the skills required to defend against them are becoming more complex. Regulations such as the GDPR bring additional rights and safeguards for individuals, but the move towards cloud IT could expose a compliance gap – especially for organisations that handle personal data. Organisations that host their data on-premise in local storage systems should be able to identify the location of most, hopefully all, of their data quite quickly, while those that host data elsewhere could have concerns over not knowing where the data is stored. However, one of the challenges with public cloud adoption is the skills required to build and maintain it. Do organisations have the skills to ensure that data stored on-premise is secure and compliant? For many organisations, meeting compliance and regulatory requirements can be easier to achieve using private clouds. However, just because organisations have outsourced their data storage, it doesn’t mean they can outsource responsibility for compliance.


Handcuffs Over AI: Solving Security Challenges With Law Enforcement

In cities like Chicago, the citizens of crime-ravaged communities fear the criminals more than they trust the police. The relationships between these communities and law enforcement are so strained that citizens do not provide evidence or testimony that will be used to successfully prosecute the criminals and guarantee deterrence. The same outcome, born of different history, creates a lack of coordination between law enforcement and the private organizations being targeted by cybercriminals. The logs and data in systems owned and maintained by these organizations contain critical information that would enable successful prosecution of cybercrime to become the norm and deliver deterrence. Building SecOps on the incorrect outcomes of service and data availability has left the craft unprepared to align with law enforcement outcomes. The tools, workflows, and data provide little value to investigators and prosecutors. When an organization does report a crime to law enforcement, the responding agency must comb through a mess of disparate data locations and formats that is more complicated to process than a murder crime scene.



Quote for the day:

"Even the most honest human in authority often does not have the power to undo the damages that bad people do" -- Auliq Ice

Daily Tech Digest - April 07, 2021

How the recent pandemic has driven digital transformation in a borderless enterprise

Talking about the biggest innovations in the last year, Harishankar says, “AIML, Data Science and digital core transformation are big areas for most companies. The whole digital core transformation is a big agenda and a lot of that is being run out of India, we are working with other centres as well but we have both existing talent, a lot of new hires with expertise in this area particularly around digital core transformation. Therefore I would say on the front end, commercial transformation, digital core transformation as well as Data Science, AIML areas, there is a lot that has been happening in the centre. In the new digital way of working it is very important to position your centre in that manner. We are leading innovation and not just part of it. We are equal partners in innovation across any centres in the world.” Talking about technologies that can be deployed or exploited from Indian centres, Bannerjee says that once you start to enhance your digital adoption effectively, your store becomes your phone or your PC. You basically have the engineering capabilities to build your front end channels, your ability to quickly access the throughput.


MLOps Best Practices for Data Scientists

Today most ML journeys to get a machine learning model into production look something like this. As a data scientist, it starts with an ML use case and a business objective. With the use case at hand, we start gathering and exploring the data that seems relevant from different data sources to understand and assess their quality. ... Once we get a sense of our data, we start crafting and engineering some features we deem interesting for our problem. We then get into the modeling stage and begin tackling some experiments. At this phase, we are manually executing the different experimental steps regularly. For each experiment, we would be doing some data preparation, some feature engineering, and testing. Then we do some model training and hyperparameter tuning on any models or model architectures that we consider particularly promising. Last but not least, we would be evaluating all of the generated models, testing them against a holdout dataset, evaluating the different metrics, looking at performance, and comparing those models with one another to see which one works best or which one yields the highest evaluation metric.
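As a concrete illustration of that manual loop, here is a minimal sketch in Python, assuming scikit-learn is available; the dataset is synthetic and the two candidate models are arbitrary stand-ins for whatever architectures a team considers promising.

```python
# Minimal sketch of the manual experiment loop: prepare data, train candidate
# models, and compare them on the same holdout set.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the gathered and feature-engineered data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=0
)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosting": GradientBoostingClassifier(),
}

# Evaluate every candidate against the same holdout set and keep the best one.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])

best = max(scores, key=scores.get)
print(f"best model: {best} (AUC={scores[best]:.3f})")
```

MLOps tooling essentially automates and tracks this loop (data versions, parameters, metrics) instead of leaving each step to manual execution.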


How Uber’s Michelangelo Contributed To The ML World

The motivation to build Michelangelo came when the team started finding it excessively difficult to develop and deploy machine learning models at scale. Before Michelangelo, the engineering teams relied mainly on creating separate predictive models or one-off bespoke systems. But such short term solutions were limited in many aspects. Michelangelo is an end-to-end system that standardises workflows and tools across teams to build and operate machine learning models at scale easily. It has now emerged as the de-facto system for machine learning for Uber engineers and data scientists, with several teams leveraging it to build and deploy models. Michelangelo is built on open-source components such as HDFS, XGBoost, Tensorflow, Cassandra, MLLib, Samza, and Spark. It uses Uber’s data and the compute infrastructure to provide a data lake that stores Uber’s transactional and logged data; Kafka brokers for aggregating logged messages; a Samza streaming compute engine; managed Cassandra clusters; and in-house service provisioning and deployment tools. ... The platform consists of a data lake that is accessed during training and inferencing. 


Review: Group-IB Threat Hunting Framework

Group-IB’s Threat Hunting Framework (THF) is a solution that helps organizations identify their security blind spots and gives a holistic layer of protection to their most critical services both in IT and OT environments. The framework’s objective is to uncover unknown threats and adversaries by detecting anomalous activities and events and correlating them with Group-IB’s Threat Intelligence & Attribution system, which is capable of attributing cybersecurity incidents to specific adversaries. In other words, when you spot a suspicious domain/IP in your network traffic, with a few clicks you can pivot and uncover what is behind this infrastructure, view historical evidence of previous malicious activities and available attribution information to help you broaden or quickly close your investigation. THF closely follows the incident response process by having a dedicated component for every step. There are two flavors of THF: the enterprise version, which is tailored for most business organizations that use a standard technology stack, and the industrial version, which is able to analyze industrial-grade protocols and protect industrial control system (ICS) devices and supervisory control and data acquisition (SCADA) systems.


How organisations can stay one step ahead of cybercriminals

To get ahead of the hackers, IT teams must be wary of unusual password activity, files being created and deleted quickly, inconsistencies in email usage, and data moving around in unexpected ways. One form of cyberattack is through hackers accessing software patch code and adding malicious code to the patch before it is delivered to customers as a routine update. This method of attack is especially devious because updates and patches are routine maintenance tasks, meaning IT teams are much less likely to be suspicious about them. Anti-malware solutions are also less likely to scrutinise incoming data like a patch from a trusted vendor. One key component that enables these types of attacks is credential compromise. Hackers are careful to obtain authentic credentials whenever possible in order to gain entry to the systems and data that they want to access inconspicuously, minimising their digital footprint. As a result, IT teams need to be wary of unusual password activity, such as an uptick in resets or permission change requests. ... Another powerful tool to reduce the risk of a cyber-attack is security awareness training. This can lower the chance of an incident such as a data breach by 70%. 


Testing Games is Not a Game

Games are getting more and more complex over the years. And gamers are a very demanding public. For those titles labeled as AAA (triple-A, high-budget projects) we are expected to deliver novel mechanics, mind-blowing gameplay and exotic plot twists. With each iteration, testing all of these becomes harder and the established ways of working need to be assessed and tweaked. That is quite hard, taking into consideration that there are so many different kinds of games that it would be almost impossible to unify the way a game tester works. Committing to a general agreement on how to tackle testing processes, tools or even a job description with required skills is not feasible at all in the industry. From one game to another, from one game company to another, the required skills vary and the role changes. Also, due to the pretty common overuse of test cases and testing documentation, live games grow exponentially and rather quickly into monsters. Game testers are usually forced to come up with better scoping techniques and risk/impact-based testing. It opens up space for gaps where quality falls down, with the consequent impact on gamers’ happiness.

Experian’s Identity GM Addresses Industry’s Post-COVID Challenges

"Today with so many bad actors focused on how to create automatic ways to fool systems into thinking they are legitimate, it's getting harder to validate that the business is transacting with a real person," Haller said. As a result, identity verification has gotten more sophisticated and better, too. For instance, it looks at IP addresses, device IDs, and GPS coordinates. Another field that is emerging is called behavior biometrics that captures data about how you interact with your keyboard and mouse and then uses that information about your behaviors to verify your identity, Haller said. "It is looking at how quickly you are typing, how you are using your phone, how you carry your phone," he said. "These are all behaviors associated with an identity. It might help determine whether someone has taken your device and is pretending to be you." To help IT security pros to tap into the most advanced technology for verifying identity and preventing fraud, Experian created CrossCore Orchestration Hub to connect the newest and most advanced services with customers. "We are trying to help our clients be more effective in discovering new risks and put new technology into production so they can protect themselves," Haller said.


Quantum computing just got its first developer certification.

"The focus right now is on preparing the workforce and skillsets so that businesses have an opportunity to leverage quantum computing in the future," Chirag Dekate, research lead for quantum at analysis firm Gartner, tells ZDNet. "But at the moment, it's a scattershot. One of the questions that always comes across from IT leaders is: 'How do I go about creating a quantum group?'" In many cases, they don't know where to start: according to Dekate, a certification like the one IBM unveiled will go a long way in pointing out to employers that a candidate has the ability to identify business-relevant problems and map them to the quantum space. Although adapted specifically for Qiskit, many of the competencies that are required to pass IBM's quantum developer certification exam are reflective of a wider understanding of quantum computing. Candidates will be quizzed on their ability to represent qubit states, on their knowledge of backends, or on how well they can plot data, plot a density matrix or plot a gate map with error rates; they will be required to know what stands behind the exotic-sounding but quantum-staple Block spheres, Pauli matrices and Bell states.


The importance of endpoint security in breaking the cyber kill chain

The term ‘kill chain’ was originally used as a military concept relating to structuring an attack into stages from identifying an adversary’s weaknesses to exploiting them. It consisted of target identification, forced dispatch to the target, decision, order to attack the target, and finally, destruction of the target. In simple terms it can be viewed as a stereotypical burglary, whereby the thief will perform reconnaissance on a building before trying to infiltrate and then go through several more steps before taking off with the valuables. ... For those defending systems and data, understanding the cyber kill chain can help identify the differing and varying defences you need in place. While attackers are constantly evolving their methods, their approach always consists of these general stages. The closer to the start of the cyber kill chain an attack can be stopped the better, so a good understanding of adversaries and their tools and tactics will help to build more effective defences. ... Endpoint protection (EPP) can detect and prevent many stages of the cyber kill chain, completely preventing most threats or allowing you to remediate the most sophisticated ones in later stages.


Interview With Karthik Kumar, Director Of Data Science For Auto Practise, Epsilon

As they say, “Data is the new code”. The machine learning code is only a small portion of the puzzle and would not suffice to take the model from a POC stage to production. Deployment is a process of continuous data flow and learning, making ML an iterative process. Hence, maintaining high quality in all phases of the ML life cycle is the most important task. The first step is to understand the business problem and to translate it into a statistical/machine learning problem. In this expedition, the quality of the data is critical, and this is where a data scientist has to spend most of their effort: comprehending and transforming the data, understanding its characteristics, and building a robust machine learning solution that leads to successful business outcomes. The work of mining the right data, improving it and understanding it is the most important step, and the one I would emphasise on my projects. Extensive feature engineering from the data would help build a strong data science model, versus iterating the models on a fixed data set. My tip to budding data scientists would be to invest maximum time in gathering the right data, and in exploring and creating features innovatively.
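To illustrate that advice, here is a minimal feature-engineering sketch in Python, assuming pandas is available; the transaction table and derived features are hypothetical examples, not from the interview.

```python
import pandas as pd

# Hypothetical raw transaction data.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount": [120.0, 80.0, 40.0, 60.0, 55.0],
    "purchase_date": pd.to_datetime(
        ["2021-01-05", "2021-02-10", "2021-01-20", "2021-02-02", "2021-03-15"]
    ),
})

# Derive per-customer features instead of feeding raw rows to the model.
features = raw.groupby("customer_id").agg(
    total_spend=("amount", "sum"),
    avg_ticket=("amount", "mean"),
    purchases=("amount", "count"),
    days_since_last=("purchase_date",
                     lambda s: (pd.Timestamp("2021-04-01") - s.max()).days),
)
print(features)
```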



Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley

Daily Tech Digest - April 06, 2021

How Confidential Computing is dispelling the climate of distrust around cloud security

Confidential Computing offers a number of additional advantages that go beyond simple safeguarding. By ensuring that data is processed in a shielded environment it is possible to securely collaborate with partners without compromising IP or divulging proprietary information. ... Until now, many enterprises have held back from migrating some of their most sensitive applications to the cloud because of worries about data exposure. Confidential computing addresses this hurdle; not only is data protected during processing, companies can also securely and efficiently collaborate with partners in the cloud. For businesses migrating workloads into the cloud, a major concern is the ability to provide security for customers and continued compliance with EU data privacy regulations. This is especially the case where businesses are the stewards of sensitive data, such as healthcare information or bank account numbers. An important feature of Confidential Computing is its use of embedded encryption keys, which locks data in a secure enclave during processing. This keeps it concealed from the operating system as well as any privileged users i.e. administrators or site reliability engineers.


A Good Data Scientist Should Combine Domain-Specific Knowledge With Technical Competence

Technological expertise augmented by strong domain knowledge is important for an aspiring data scientist. One should have a clear understanding of the rules and practices of the industry before applying technological aspects to it. Be it automotive, BFSI, manufacturing or ecommerce, you can be a good data scientist in the field if you couple domain-specific knowledge with technical competence. Ideal candidates would have a degree or background knowledge of computer science or information technology. Data science is vast and may not suit everyone. Therefore, it is vital to have an aptitude to understand the data, see patterns, analyse from different perspectives and present findings to suit the end-user while also being open to understanding the domain. ... Industry partnerships are crucial to educational institutions. The two key components of a data science course are the fundamental conceptual foundation laid by highly qualified academicians and industry stalwarts with on-ground expertise and visibility. Both ensure that the key takeaways are beyond theoretical knowledge and include practical insights and understanding.


Can Digital Twins Help Modernize Electric Grids?

Digital twins could help guide decision-making as California completes its transition to 100% renewables, according to Parris, who points out that GE Digital is working with Southern California Edison, one of the state’s three largest investor-owned utilities, to help model its operations. However, the mix of renewables in the Golden State, not to mention Gov. Gavin Newsom’s ban on gasoline- and diesel-powered cars starting in 2035, will make it much harder to find a balance than in the Lone Star State. “It’s not just the heating [and cooling] of the buildings, but the cars,” Parris says. “It will be more distributed energy resources, like EVs [electric vehicles]. How do I bring them in? They add another complexity, because I don’t know when you’re going to charge your EV. I don’t know how much you’re going to use your car.” Backers of renewable energy are banking on large battery plants being able to handle short-term spikes in energy demand that have traditionally been handled by natural gas-powered “peaker” plants in California. But grid-scale battery technology is still unproven, and it also introduces more variables into the grid equation that will have to be accounted for. “How long does that battery live [is] based on how often you charge and discharge it, so the life of the battery is a factor,” Parris says.


Stop Calling Everything AI, Machine-Learning Pioneer Says

Computers have not become intelligent per se, but they have provided capabilities that augment human intelligence, he writes. Moreover, they have excelled at low-level pattern-recognition capabilities that could be performed in principle by humans but at great cost. Machine learning–based systems are able to detect fraud in financial transactions at massive scale, for example, thereby catalyzing electronic commerce. They are essential in the modeling and control of supply chains in manufacturing and health care. They also help insurance agents, doctors, educators, and filmmakers. Despite such developments being referred to as “AI technology,” he writes, the underlying systems do not involve high-level reasoning or thought. The systems do not form the kinds of semantic representations and inferences that humans are capable of. They do not formulate and pursue long-term goals. “For the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations,” he writes. “We will need well-thought-out interactions of humans and computers to solve our most pressing problems. ...”


AI And HR Tech: Three Critical Questions Leaders Need To Support Diverse Teams

When dealing with HR AI tech, the limitations around diversity are the by-product of how solutions are designed. We are rapidly moving into a space where solutions provide emotion recognition: AI analyzes facial expressions or body posture to inform decisions around recruitment. Emotion recognition is projected to be worth $25 billion by 2023. Despite extraordinary growth in this area, there are challenges and significant kinks to be addressed, namely ethical elements concerning the creation of the algorithms. Companies are grappling with HR AI and ethics. Recent examples demonstrate the enormity of the ramifications when things don't go according to plan. In other words, when things go wrong, they go badly wrong. Consider, for example, Uber, where fourteen couriers were fired due to a failure of recognition by facial identification software. In this case, the technology, based on Microsoft's face-matching software, has a track record of failing to identify darker-skinned faces, with a 20.8 percent failure rate for darker-skinned female faces. The same technology has a zero percent failure rate for white men.


How AI Can Solve The COBOL Challenge

Fortunately, using an old-school approach to AI and applying that to a different scope of the problem can save developers time in finding code by automating the process of precisely identifying the code that requires attention — regardless of how spread out it might be. Much like how contemporary AI tools cannot comprehend a book in a way a human does, human developers struggle to comprehend the intent of previous developers encoded in the software. By describing behaviors that need to change to AI tools, developers no longer have to labor searching through and understanding code to get to the specific lines implementing that behavior. Instead, developers can quickly and efficiently find potential bugs. Rather than dealing with a deluge of code and spending weeks searching for functionality, developers can collaborate with the AI tool to rapidly get to the code on which they need to work. This approach requires a different kind of AI: one that doesn’t focus on assisting the developer with syntax. Instead, AI that focuses on understanding the intent of the code is able to “reimagine” what computation represents into concepts, thereby doing what a developer does when they code — but at machine speed.


Secure API Design With OpenAPI Specification

API security is at the forefront of cybersecurity. Emerging trends and technologies like cloud-native applications, serverless, microservices, single-page applications, and mobile and IoT devices have led to the proliferation of APIs. Application components are no longer internal objects communicating with each other on a single machine within a single process — they are APIs talking to each other over a network. This significantly increases the attack surface. Moreover, by discovering and attacking back-end APIs, attackers can often bypass the front-end controls and directly access sensitive data and critical internal components. This has led to the proliferation of API attacks. Every week, there are new API vulnerabilities reported in the news. OWASP now has a separate list of top 10 vulnerabilities specifically for APIs. And Gartner estimates that by 2022, APIs are going to become the number one attack vector. Traditional web application firewalls (WAF) with their manually configured deny and allow rules are not able to determine which API call is legitimate and which one is an attack. For them, all calls are just GETs and POSTs with some JSON being exchanged.
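To make the contrast with generic WAF rules concrete, here is a minimal sketch of spec-driven ("positive security") validation in Python, assuming the jsonschema package is available; the schema fragment, field names and limits are hypothetical, standing in for the constraints an OpenAPI definition would carry.

```python
from jsonschema import ValidationError, validate

# Hypothetical schema for a "create user" request body, as it might appear
# (in JSON Schema form) inside an OpenAPI definition.
create_user_schema = {
    "type": "object",
    "properties": {
        "username": {"type": "string", "maxLength": 32, "pattern": "^[a-z0-9_]+$"},
        "age": {"type": "integer", "minimum": 0, "maximum": 150},
    },
    "required": ["username"],
    "additionalProperties": False,  # reject fields the spec does not declare
}

def is_legitimate(payload: dict) -> bool:
    """Accept only requests that match the published API contract."""
    try:
        validate(instance=payload, schema=create_user_schema)
        return True
    except ValidationError:
        return False

print(is_legitimate({"username": "alice", "age": 30}))        # True
print(is_legitimate({"username": "alice", "isAdmin": True}))  # False: undeclared field
```

Checking every call against the contract is what lets an API gateway distinguish a legitimate POST from one smuggling unexpected fields, something a rule-based WAF struggles to do.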


Zero Trust creator talks about implementation, misconceptions, strategy

“The strategic concepts of Zero Trust have not changed since I created the original concept, though I have refined some of the terminology,” he told Help Net Security. “I used to say that the first step in the five-step deployment model was to ‘Define Your Data.’ Now I say that the first step is to ‘Define Your Protect Surface.’ My idea of a protect surface centers on the understanding that the attack surface is massive and always growing and expanding, which makes dealing with it an unscalable problem. I have inverted the idea of an attack surface to create protect surfaces, which are orders of magnitude smaller and easily known.” Among the pitfalls that organizations opting to implement a zero-trust model should try to avoid, he singles out two: thinking that Zero Trust is binary (that either everything is Zero Trust or none of it is), and deploying products without a strategy. “Zero Trust is incremental. It is built out one protect surface at a time so that it is done in an iterative and non-disruptive manner,” he explained. He also advises starting with creating zero-trust networks for the least sensitive/critical protect surfaces first, and then slowly working one’s way towards implementing Zero Trust for the more and the most critical ones.


How can businesses gain the most value from their cloud investments?

Innovation can come from the smallest and simplest of places. And the chances are, the cloud can take your business there, whether it’s to be more productive or agile, more sustainable, or secure. The important thing is for this vision to be clear, well communicated, and considered in all tech investments, hires and processes. For example, if a business wants to make better use of data across its operations, technologies such as IoT, AI and robotics will be critical to gathering, deciphering, and actioning that data across the cloud. Businesses will also be hiring and developing the talent to operate these tools. And we know this isn’t easy. UK businesses are hungry for cloud computing skills and the talent pool is not as big as they would like. They will also be thinking about the platforms available that enable the entire organisation — not just the tech team — to partake in this culture of data-driven operations. On the other hand, perhaps a business wants their cloud investment to bring them cost savings — a key driver for many migrations. To do this successfully, CIOs will need to think strategically about how they are leveraging the cloud’s pay-as-you-go ‘as a service’ model, and whether they are using technologies, such as cloud virtualisation, to be more efficient or unlock revenue opportunities.


NFT Thefts Reveal Security Risks in Coupling Private Keys & Digital Assets

Like other blockchain-based platforms, NFT marketplaces are targeted by hackers. The centralized design of the marketplaces and the high value attached to NFTs make them prized targets. They can be subject to a range of attack vectors, including phishing, insider threats, supply chain attacks, brute-force attacks against account credentials, ransomware, and even distributed denial-of-service attacks. Blockchain design encompassing NFTs provides certain fundamental properties applicable to security, such as immutability and integrity checks. Immutability inherent in blockchain design is considered one of the core tenets of any transaction-security strategy. It's introduced to create a single source of truth and supports nonrepudiation, which is crucial for accountability of actions. But this still does not guard the platform against attacks leading to an account takeover (ATO), a major threat. There is a clear, exploitable scenario here as once an NFT has been transferred to someone else's wallet or sold, it may not be recovered by the sender or a third party. Enabling private keys to serve as gatekeepers is bound to create concentrated risk in one area, leading to a single-point-of-failure scenario.



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward

Daily Tech Digest - April 05, 2021

Encrypted method that measures encounters could slow down or prevent future pandemics

Current approaches for mitigating the spread of infectious disease in a population include exposure notification systems, also known as contact tracing, that rely on pseudonyms. These systems are currently used on smartphones as a way to digitally track whether a person comes into contact with someone who has contracted COVID-19. This can help health officials mitigate the spread of the disease by isolating individuals at risk of infecting others. The benefit of this method of using encounter IDs is its promotion of privacy. By labeling each encounter with a random number and not linking the encounter to the device the person is carrying, it becomes much harder for a cyber attacker to obtain that user’s identity. The target audience for this approach would be a smaller population in a controlled setting like NIST‘s campus or nursing homes, said researcher Angela Robinson, also an author of the new paper. “We are advancing a different approach to contact tracing using encounter metrics.” Gathering these measurements of how individuals interact with one another can help with better understanding ways of modifying working environments, such as altering building layouts and establishing mobility rules, so as to slow the spread of disease.
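As a rough illustration of the privacy idea, not of the paper's actual protocol, here is a minimal Python sketch in which every contact event gets a fresh random label that cannot be traced back to either device; the record fields are assumptions for illustration.

```python
import secrets
import time

def new_encounter_record() -> dict:
    """Create a record for one contact event, labeled with an unlinkable random ID."""
    return {
        "encounter_id": secrets.token_hex(16),  # random label, not derived from any device
        "timestamp": int(time.time()),
        # anonymous metrics such as signal strength or duration could be stored here
    }

print(new_encounter_record())
```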


Blockchain and taking the politics out of tech

One of the biggest problems and challenges in the world of crypto is how do you make sure that people who are transacting in crypto are not sending money to terrorists or using crypto to engage in money laundering. And it’s a problem because the whole promise of crypto is to allow people to transact peer to peer without the need for a bank limit, right? So normally if you’re writing a check, it goes to the banking system and the bank looks to see who the payee is and figures out if they’re on some list, or if you’re using cash there are these currency transaction reports you have to fill out. ... Blockchain identity verification is making probabilistic judgments based on a large amount of data. So, it may not know for sure that you’re not Vladimir Putin. But what it does know is that you’re a person who bought a latte at a Starbucks in Palo Alto yesterday or that you’re a person who has a Netflix subscription you’ve been paying on for 23 months. And so when we make these probabilistic judgments, we can reduce to a statistically low rate the likelihood that you’re engaged in some kind of malfeasance.


Data lineage: What it is and why it’s important

Data lineage is comprised of methodologies and tools that expose data’s life cycle and help answer questions around who, when, where, why, and how data changes. It’s a discipline within metadata management and is often a featured capability of data catalogs that allow data consumers to understand the context of data they are utilizing for decision-making and other business purposes. One way to explain data lineage is that it’s the GPS of data that provides “turn-by-turn directions and a visual overview of the completely mapped route.” Others view data lineage as a core datagovops practice, where data lineage, testing, and sandboxes are data governance’s technical practices and automation opportunities. Capturing and understanding data lineage is important for several reasons: Compliance requirements: Many organizations must implement data lineage to stay on the good side of government regulators. Data lineage in risk management and reporting is required for capital market trading firms to support BCBS 239 and MiFID II regulations. For large banks, automating extracting lineage from source systems can save significant IT time and reduce risks. In pharmaceutical clinical trials, the ADaM standard requires traceability between analysis and source data.


7 Ways to Reduce Cyber Threats From Remote Workers

This hybrid work model comes with advantages and disadvantages — and among the disadvantages is a sharp rise in the number of cyber threats and vulnerabilities. When employees connect to organizational servers, databases, and intranets via the Internet, they are really working at a remote endpoint of the corporate office. But unlike in office-based environments, they are not as diligently protected. Therefore, CISOs need to view home-based devices as integral parts of IT and mandate that the devices, as well as the people using them, undergo the same level of security as they would when operating from the office. Like any other maturity improvement program, organizations must grapple with the challenges posed by their people (employees, third-party vendors, and so on), processes, and technology and implement the necessary security measures to protect them. ... To avoid breaches, employers need to implement employee training courses with a focus on the latest threat scenarios. Management, operations, and R&D are all prime targets of social engineering, phishing, and scamming campaigns (among other threats). 


How To Remove Ransomware From Android Phone Easily?

First, you will need to restart your phone in safe mode. Different Android phones enter safe mode in different ways, so find out how to do it on your device. Once you have the right method, your screen will show that your phone is starting in safe mode. When your device is in safe mode, third-party apps are not running. This may or may not include the malware, depending on how it is developed. Once your phone is running in safe mode, you can check your installed apps. You can do this by going to Settings and then to Apps. On the list of apps installed on your phone, look for apps that you don’t remember installing. When you find an app that looks suspicious, uninstall it from your phone. Depending on how you use your phone, you may have a long list of apps to go through. Make sure to go through all the apps on the device and remove any that are suspicious or that you don’t use often. After you are through with the uninstallation process, head to your phone’s security settings. Here, look for apps under the device administrators section. If you find any apps that are suspicious in this section, deny them the rights to be administrators on your phone and also uninstall them. They may have let the malware in.


The wholesale financial services firm of the future cannot survive without AI

Compliance is the first major front. Regulatory changes come into effect over the course of the next year which require forensic oversight of large amounts of documentation: a task that is too slow, error-prone and expensive to be completed manually. LIBOR, Basel IV and Dodd-Frank QFC recordkeeping requirements place more and more demands on financial services companies and many simply aren’t adequately prepared. ... The second area is market risk. The volatility of markets in the past year means that transparent oversight is critical. This is where AI comes into its own. AI technology can automate the processing and analysis of the documentation which underpins much of the financial system, from loan agreements to insurance policies. This means that work which would previously have entailed long hours can be accelerated, allowing for vastly improved efficiency and speed and, critically, much better oversight of the compliance requirements which regulators mandate. AI gives institutions the ability to remain vigilant and to keep abreast of risks with much more efficiency than ever before. With market conditions likely to remain volatile throughout 2021, fast, responsive and data-backed decisions aren’t only essential for each institution, they are critical for the health of the financial system as a whole.


Fake Unemployment Benefit Websites Preying On Laid-Off Workers, Experts Warn

You may want to take several additional steps to avoid these and other scams, says Sadler, whose company uses artificial intelligence to detect patterns in legitimate and potentially fraudulent emails and to automatically block potential threats. Besides considering an email security system at home or work, Sadler said, “It’s important for people to employ two-factor authentication and to not use the same password across different sites — those are two of the best steps you can take” for better online security. He also suggests getting a password manager, such as RoboForm, 1Password, Keeper, Norton, or a similar tool that can generate your passwords, distribute them across multiple sites, and protect them with encryption software to guard against hackers. Don’t automatically trust an email asking for private information even if the email address looks legitimate, he added. “People may be trained to look out for [bizarre requests],” Sadler said, “and they may be on alert if the email address is unfamiliar. But sometimes the email account itself is compromised, and the phishing email is using a falsified IP address... If you're unsure, you can verify the legitimacy of the sender by calling the organization directly.”


AI at Your Service

From a CX and EX optimization perspective, the point of an AI system is to increase automation efficiencies. If AI can resolve an issue while communicating in a humanlike manner, operations have been optimized effectively and that particular issue doesn’t need to be escalated to a live person and tap into limited resources. ... This also empowers the employees to refocus on more complex, rewarding tasks that require human attention. Let’s look at an example of how AI is utilized in the healthcare industry. A patient comes in with a skin problem. If it’s an anomaly, the doctor may have to do more research, run a series of tests, get a second opinion, etc. Compare that to an AI system, which can look at hundreds and thousands of cases of a similar skin condition and, in a nanosecond, give a diagnosis that’s 90% accurate. That’s a genuine interactive process between a human and an AI system. In addition to reducing costs and freeing up personnel for more business-critical tasks, AI can build brand loyalty for an organization. In Formation.ai’s study, Brand Loyalty 2020: The Need for Hyper-Individualization, 79% of consumers stated that the more personalized tactics a brand uses, the more loyal the customer is to the brand. In fact, 81% of consumers will share basic personal information in exchange for a more personalized customer experience.


What is a streaming database?

Some streaming databases are designed to dramatically reduce the size of the data to save storage costs. They can, say, replace a value collected every second with an average computed over a day. Storing only the average can make long-term tracking economically feasible. Streaming opens up some of the insides of a traditional database. Standard databases also track a stream of events, but they’re usually limited to changes in data records. The sequence of INSERTs, UPDATEs, and DELETEs is normally stored in a hidden journal or ledger inside. In most cases, developers don’t have direct access to these streams. They’re only offered access to the tables that show the current values. Streaming databases open up this flow and make it simpler for developers to adjust how the new data is integrated. Developers can adjust how the streams of new data are turned into tabular summaries, ensuring that the right values are computed and saved while the unneeded information is ignored. The opportunity to tune this stage of the data pipeline allows streaming databases to handle markedly larger datasets.
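As a simple illustration of that size reduction, here is a minimal sketch in Python, assuming pandas is available; the column names and date range are hypothetical, and a real streaming database would perform the aggregation continuously rather than in one batch.

```python
import pandas as pd

# Three days of hypothetical per-second sensor readings (~259,000 rows).
readings = pd.DataFrame({
    "ts": pd.date_range("2021-04-01", periods=3 * 86400, freq="s"),
    "value": range(3 * 86400),
})

# Collapse per-second values into one daily average before long-term storage.
daily = readings.set_index("ts")["value"].resample("1D").mean()
print(daily)  # three rows instead of ~259,000
```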


Why Data Democratization Should Be Your Guiding Principle for 2021

Data, and universal access to it, is key for today’s companies to create new opportunities and unlock the value embedded within their organization – all of which can positively impact a company’s top and bottom line. True data democratization pushes organizations to rethink and maybe even restructure, which often means driving a dramatic cultural change in order to realize financial gain. It also means freeing information from the silos created by internal departmental data, customer data, and external data, and turning it into a borderless ecosystem of information. The trouble is many companies aren’t that good at it. Our research last year initially suggested senior decision-makers were confident that they were opening up access to data sufficiently. However, when we scratched a little deeper, we found almost half (46%) of respondents believed that data democratization wasn’t feasible for them. IT infrastructure challenges were cited by almost four out of five respondents as a blocker to democratizing data in their organization. Performance limitations, infrastructure constraints, and bottlenecks are all standing in the way.



Quote for the day:

"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - April 04, 2021

Compositional AI: The Future of Enterprise AI

Compositionality refers to the ability to form new (composite) services by combining the capabilities of existing (component) services. The existing services may themselves be composite, leading to a hierarchical composition. The concept is not new, and has been studied previously in different contexts; most notably, Web Services Composition and Secure Composition of Security Protocols. Web Services follow the Service Oriented Computing (SOC) approach of wrapping a business functionality in a self-contained Service. There are mainly two approaches to composing a service: dynamic and static. In the dynamic approach, given a complex user request, the system comes up with a plan to fulfill the request depending on the capabilities of available Web services at run-time. In the static approach, given a set of Web services, composite services are defined manually at design-time combining their capabilities. ... In the very primitive world of supervised learning, an AI Service consists of data used to train a model, which is then exposed as an API. There is of course an alternate deployment pipeline, where a trained model can be deployed on an edge device to be executed in an offline fashion.
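To make the idea of composition concrete, here is a minimal sketch of static (design-time) composition in Python; the two component AI services are stand-in stub functions, not real models or APIs.

```python
from typing import Callable

def transcribe_audio(audio: bytes) -> str:
    """Component service 1: speech-to-text (stub standing in for a trained model/API)."""
    return "great product, will buy again"

def score_sentiment(text: str) -> float:
    """Component service 2: sentiment scoring (stub standing in for a trained model/API)."""
    return 0.9 if "great" in text else 0.1

def compose(*services: Callable) -> Callable:
    """Chain component services into one composite service, defined at design time."""
    def composite(x):
        for service in services:
            x = service(x)
        return x
    return composite

# Composite service built from the two components.
voice_feedback_sentiment = compose(transcribe_audio, score_sentiment)
print(voice_feedback_sentiment(b"<audio bytes>"))  # 0.9
```

A dynamic composer would instead pick and order the component services at run time, based on the request and on which services are currently available.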


Four common pitfalls of HyperLedger implementation

One of the main goals of distributed ledger technology (DLT), used by HyperLedger, is decentralization. The nodes (servers) of the network should be spread among all organizations in the consortium and they should not depend on third-party providers. However, we have seen implementations where the whole infrastructure is maintained by one organization, or where it is spread among the organizations but all of them host their nodes with the same cloud vendor (e.g. AWS). With centralized infrastructure comes the threat that one organization or external provider could easily turn off the system and thus break the principal goal of DLT. ... One of the extremes in defining permissions in the DLT network, contrary to limiting the access of an organization, is privileging one of the organizations in such a way that it can make any changes to the distributed ledger. While such a configuration does not necessarily introduce a vulnerability, it is against blockchain rules. We have seen different implementations with this issue that all allowed one organization to freely modify the contents. The channel endorsement policy required a signature from only one organization.


ETL vs. Data Preparation

ETL relies on a predetermined set of rules and workflows, she said. Potential issues, such as misspellings or extra characters, must be anticipated beforehand so rules for how to deal with those issues can be built into the end-to-end workflow. Conversely, a data prep tool using built-in algorithms is capable of discovery and investigation of the data as it proceeds through the workflow. “For example, algorithms based on machine learning or natural language processing can recognize things that are spelled differently but are really the same.” She gave the example of a city called “St. Louis”, and how it could be entered in multiple ways, or there may be several cities with the same name spelled differently. In an ETL workflow, rules for encountering each particular variation must be programmed ahead of time, and variations not programmed are skipped. A data prep tool can find spelling differences without help, so that the user does not have to anticipate every possible variation. The tool can prompt for a decision on each different variation on the name of this city, providing an opportunity to improve the data before it’s used, she said.
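As a small illustration of how such discovery can work without predetermined rules, here is a minimal sketch in Python using only the standard library; the canonical list and observed values are hypothetical, and real data-prep tools use far more sophisticated ML/NLP matching.

```python
from difflib import get_close_matches

# Canonical values and messy observed values (hypothetical examples).
canonical = ["St. Louis", "Springfield", "Columbia"]
observed = ["St Louis", "Saint Louis", "st. louis", "Springfeld"]

for value in observed:
    # Suggest the closest canonical spelling instead of relying on a hand-written rule.
    match = get_close_matches(value.title(), canonical, n=1, cutoff=0.7)
    print(f"{value!r} -> {match[0] if match else 'needs review'}")
```

The point of the contrast is that the variants above never had to be anticipated; the tool surfaces them and lets the user confirm or correct each suggestion.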


The coming opportunity in consumer lending

The second major step is to build the decision engine. In this area, new entrants will have a large advantage over existing lenders with legacy software that they do not want to alter. The new decision engine can largely be built using advanced analytics, machine learning, and other tools that capitalize on speed and agility. By using machine learning, the new-entrant lenders will be able to automate as much as 95 percent of underwriting processes while also making more accurate credit decisions. Similarly, real-time machine-learning solutions can improve pricing and limit setting and help firms monitor existing customers and credit lines through smarter early-warning systems. Lenders can also use straight-through processing to generate faster transactions and a better customer experience. The design of the decision engine can be modular for maximum flexibility. That will allow lenders to retain control of strategic processes while potentially outsourcing other parts. The modular format can also facilitate risk assessment. This approach involves a series of steps, completely integrated from the front end to the back end, and is designed for objective and quick decision making.


WhatsApp Privacy Controversy and India’s Data Protection Bill

Clause 40 of the PDP bill is particularly dangerous and could be detrimental to the data rights of the users of WhatsApp. This provision empowers the Data Protection Authority to include certain data fiduciaries in a regulatory sandbox who would be exempt from the obligation of taking the consent of the data principal in processing their data for up to 36 months. The GDPR does not have any provision related to the regulatory sandbox. Such a sandbox might be required to provide relaxations to certain corporations, such as those that deal with Artificial Intelligence so that they can test their technology in a Sandbox environment. However, it is a commonly accepted practice that in a good regulatory sandbox the users whose data is taken voluntarily participate in the exercise. Such a condition is altogether done away with by this provision. The authority that has to assess the applications for inclusion in a regulatory Sandbox is the Data Protection Authority (DPA). The members of the DPA are to be selected by bureaucrats serving under the Union government. So, it cannot be expected to work independently of government control (Clause 42(2)).


A Data Science Wish List for 2021 and Beyond

Sometimes, we simply cannot overcome the problem of needing more data. It could be that data collection is too expensive or the data is not possible to collect in a reasonable time frame. This is where synthetic data can provide real value. Synthetic data can be created by training a model to understand available data to such an extent that it can generate new data points that look, act, and feel real, i.e. mimic the existing data. An example could be a model that predicts how likely small and medium-sized businesses (SMBs) in the retail sector are to default on loans. Factors such as location, number of employees, and annual turnover might be key features in this scenario. A synthetic data model could learn the typical values of these features and create new data points that fit seamlessly into the real dataset; the expanded dataset can then be used to train an advanced loan default prediction model. ... Another benefit of synthetic data is data privacy. In the financial services industry, much of the data is sensitive and there are many legal barriers to sharing datasets. Leveraging synthetic data is one way we can reduce these barriers, as synthetic data points feel real but do not relate to real accounts and individuals.
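
As a toy sketch of the idea (real synthetic-data generators are far more sophisticated, and every value below is invented), one simple approach is to fit a joint distribution to a small "real" SMB dataset and sample new rows that share its means and correlations:

```python
# Toy illustration of synthetic data generation, not a production technique.
# The "real" dataset and its feature values are entirely made up.
import numpy as np

rng = np.random.default_rng(42)

# Pretend "real" SMB data: [num_employees, annual_turnover_kUSD, region_code]
real = np.column_stack([
    rng.integers(2, 50, 200),
    rng.normal(800, 250, 200),
    rng.integers(0, 5, 200),
]).astype(float)

# Fit a simple joint model (multivariate normal) to the real rows...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and sample synthetic rows that mimic the real data's means and correlations.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

augmented = np.vstack([real, synthetic])  # expanded training set for the default model
print(real.shape, synthetic.shape, augmented.shape)
```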


Top 4 Blockchain Risks A CIO Should Know

Blockchain risks open the door to malicious activities such as double-spending and record hacking. Record hacking means a hacker will try to steal a blockchain participant’s or cryptocurrency owner’s credentials and transfer money to his/her own account, or hold the credentials as leverage for ransom. As per MIT’s 2019 report, since 2017, hackers have stolen around $2 million worth of cryptocurrency. Another malicious activity is double-spending, where hackers gain control of the majority of the network’s computing power and rewrite the transaction history. This allows them to spend the cryptocurrency and erase the transaction from history once they receive their orders. With digital money, the hacker can send the merchant a copy of the digital token while retaining the original token and using it again. Implementing and maintaining blockchain applications and platforms is also expensive. If there is a fault in their working or the system fails due to these risks, fixing things will cost a massive amount of money. A blockchain expert is required to overcome such risks, and the expert may charge a hefty amount to provide solutions.


Top Challenges Involved In Healthcare Data Management

Medical data is sensitive and must adhere to government regulations, such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA) in the US. Data discovery challenges and poor data quality make it much more difficult to perform the required audits and meet regulatory requirements, and they limit the diversity of data healthcare providers can use for the benefit of patients. Adhering to the HIPAA rules may help in effective data governance. Effective data governance within a healthcare organization can help better manage and use data, create processes for resolving data issues and, eventually, enable users to make decisions based on high-quality information assets. However, all this begins with better data collection and making sure that the data collected is accurate, up-to-date, complete, and in compliance with the HIPAA regulatory standards. A well-designed HIPAA-compliant web form solution can be instrumental in enabling healthcare organizations to manage and streamline data collection processes, including new patient forms, HIPAA release forms, contact update forms, patient medical history forms, and consent forms.


CDO's Next Major Task: Enabling Data Access for Non-Analysts

Unlike product managers from two decades ago, today's product manager wants to look at the user flow data on the website and design changes to the UX flow to improve revenue. He doesn't have the luxury of a dedicated analyst supporting him for every question he has about his product. The marketing manager has direct hands-on access to the CRM system. He is pulling targeted customers for the next campaign and needs a lifetime value score for each customer to target the highest-value customers effectively. To resolve customer concerns quickly, customer support agents need access to what happened when the customer accessed the website two days ago. They don't have the luxury of yesteryear's one-week resolution SLA; the customer expects resolution during the call. The CDO needs a proper plan to enable appropriate access to the right kind of data for the right person, with the right security level. Without such a plan, the business's numerous stakeholders will start standing up their individual mini data marts to serve their needs. If that happens, the CDO's past five years of centralizing data sources will amount to nothing. What is needed is a proper data access strategy and governance for the entire enterprise.
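
What "the right data, for the right person, at the right security level" can reduce to in practice is a policy lookup; the sketch below is only a hypothetical illustration, with invented role names, datasets, and sensitivity levels:

```python
# Hedged sketch of a role-based data access policy. All role names, dataset
# names, and sensitivity levels here are invented for illustration.
SENSITIVITY = {"public": 0, "internal": 1, "restricted": 2}

POLICY = {
    "product_manager":   {"datasets": {"web_user_flows"},               "max_level": "internal"},
    "marketing_manager": {"datasets": {"crm_customers", "ltv_scores"},  "max_level": "internal"},
    "support_agent":     {"datasets": {"web_user_flows"},               "max_level": "restricted"},
}

def can_access(role: str, dataset: str, level: str) -> bool:
    """Return True if the role may read this dataset at this sensitivity level."""
    rule = POLICY.get(role)
    if rule is None or dataset not in rule["datasets"]:
        return False
    return SENSITIVITY[level] <= SENSITIVITY[rule["max_level"]]

print(can_access("support_agent", "web_user_flows", "restricted"))  # True
print(can_access("product_manager", "crm_customers", "internal"))   # False
```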


Why ML should be written as pipelines from the get-go

Data scientists are not trained or equipped to care about production concerns such as reproducibility — they are trained to iterate and experiment. They don't really care about code quality, and at an early point it is probably not in the best interest of the company to be super diligent in enforcing these standards, given the trade-off between speed and overhead. Therefore, what is required is a framework that is flexible but enforces production standards from the get-go. A very natural way of implementing this is via some form of pipeline framework that exposes an automated, standardized way to run ML experiments in a controlled environment. ML is inherently a process that can be broken down into individual, concrete steps (e.g. preprocessing, training, evaluating), so a pipeline is a good solution here. Critically, by standardizing the development of these pipelines at the early stages, organizations can break the cycle of destroying and recreating ML models across multiple tools and steps, and speed the path from research to deployment.
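
A minimal sketch of what such a pipeline abstraction might look like follows; it is not any particular framework, and the step names and toy data are invented. Each experiment is a named sequence of steps run through a single entry point, so every run is executed and recorded the same way:

```python
# Minimal sketch of the pipeline idea (no particular framework implied).
from dataclasses import dataclass, field
from typing import Any, Callable, List, Tuple

@dataclass
class Pipeline:
    name: str
    steps: List[Tuple[str, Callable[[Any], Any]]] = field(default_factory=list)

    def step(self, step_name: str):
        """Decorator that registers a function as the next step in the pipeline."""
        def register(fn: Callable[[Any], Any]) -> Callable[[Any], Any]:
            self.steps.append((step_name, fn))
            return fn
        return register

    def run(self, data: Any) -> Any:
        """Run all steps in order, feeding each step's output into the next."""
        for step_name, fn in self.steps:
            print(f"[{self.name}] running step: {step_name}")
            data = fn(data)
        return data

pipe = Pipeline("loan-default-model-v1")

@pipe.step("preprocess")
def preprocess(rows):
    return [r for r in rows if r is not None]        # drop missing records

@pipe.step("train")
def train(rows):
    return {"model": "dummy-classifier", "n_train": len(rows)}

@pipe.step("evaluate")
def evaluate(artifact):
    artifact["accuracy"] = 0.90                       # placeholder metric
    return artifact

print(pipe.run([1, None, 2, 3]))
```

Because every experiment goes through the same `run` entry point, it is straightforward to add logging, artifact versioning, or a production runner later without rewriting the data scientist's step code.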



Quote for the day:

“Just because you’re a beginner doesn’t mean you can’t have strength.” -- Claudio Toyama

Daily Tech Digest - April 03, 2021

What Is a Cybersecurity Legal Practice?

A cybersecurity attorney is not an auditor; this attorney does not sit in an ivory tower doing oversight of the company’s information technology work. Instead, corporate officers must recognize that a cybersecurity attorney must be a part of the operational team. The attorney should be as involved in the company’s operations as the information technology expert deploying new defensive measures in the company’s networks. An effective cybersecurity attorney has to be in the trenches, helping to develop the statements of work for new contracts, negotiating information-sharing agreements, advising on legal risks associated with the many and varied daily decisions of securing networks, and managing the hour-by-hour response during an incident. ... Finally, a cybersecurity attorney must be multilingual in the jargon of both law and tech. One of the key jobs of such an attorney is to translate legal requirements (such as obligations imposed by regulations) into design requirements and to understand the technical details enough to ask probing questions, spot legal issues and translate risks to organizational leadership.


3 steps to meeting data privacy regulation compliance through identity programs

This focus on security, however, isn’t just a reaction to more cyberattacks. It also correlates with the enormous acceleration in digital transformation initiatives over the last year. Some industry experts dubbed it the shift from “cloud speed to COVID speed.” The pandemic forced a new way of working, and this ultimately means a new way of ensuring the security of how we work. It also means that companies store and manage more data in the cloud, which comes with its own regulatory compliance challenges. Every new process moved to the cloud, automated or made digital, has become a new vulnerability. Security teams need to manage these vulnerabilities to protect the data from a cyber-attack and ensure compliance with the latest data privacy regulations, such as the General Data Protection Regulation (GDPR) or the California Privacy Rights Act (CPRA). Other non-compliance issues will grow over the next year, especially as companies continue to remotely onboard and offboard customers and employees. These new processes will impact how to protect data and comply with the multiple different patchwork privacy regulations from various states and countries.


Speed and resilience: Five priorities for the next five months

Over the past year, organizations have become well versed in the basics of ensuring a safe working environment. More recently, however, companies have reported that some of their workers appear to be more willing to participate in higher-risk activities simply because they are tired of living with virus restrictions. This will require a different type of intervention and messaging, especially because newer COVID-19 variants pose a high risk and may be transmitted in ways that are not yet fully understood. Employers have a unique societal role to play in vaccination; they are important voices and can help reduce the friction associated with getting the vaccine. Self-reported data from a wide range of organizations point to individual and team productivity being higher than before the onset of the pandemic, but not uniformly so. According to a McKinsey survey, productivity is up for about half of all workers, with the other half reporting no change or lower productivity. The same survey suggested that, while the inability to disconnect is a real concern, increased productivity is correlated to a willingness to change how people work. 


Quantum computing may be able to solve the age-old problem of reasoning

The results show that the quantum machine could use inference models to draw conclusions. Probabilistic inference, which means the incorporation of uncertainty into computer programming, is particularly suited to quantum computers, Fiorentini said, because "quantum models have been proven to be more expressive, easier to train under certain circumstances." In practical terms, this means that quantum computing can be useful to solve both scientific and engineering problems. The results are "quite flexible, surprisingly robust, and can be applied in many fields," said Fiorentini. For instance, he added, Bayesian networks have traditionally been used in predictive maintenance of mission-critical equipment, such as jetliners and jet engines. "You model a system, and then you perform inference on the model by asking certain questions and by figuring out if the system is stable, reliable, and robust--or is about to break down--so you can intervene," Fiorentini said. "And which part is signaling the stress more strongly?" Medical diagnostics is another field that can benefit from these results. Although the results of this study cannot be applied to it directly, "continuing in this direction, some of these techniques are applied to drug discovery," Fiorentini noted.
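
For a sense of what probabilistic inference means in the predictive-maintenance example, here is a classical toy calculation (nothing quantum, and the probabilities are made up): Bayes' rule combines a prior belief about a component with the reliability of a stress alarm to give the posterior probability that the component is degrading:

```python
# Classical toy example of probabilistic inference; all probabilities invented.
p_failing = 0.02                  # prior: component is degrading
p_alarm_given_failing = 0.90      # alarm fires when the component is degrading
p_alarm_given_healthy = 0.05      # false-alarm rate on a healthy component

# Total probability the alarm fires.
p_alarm = (p_alarm_given_failing * p_failing
           + p_alarm_given_healthy * (1 - p_failing))

# Posterior: probability the component is degrading given that the alarm fired.
p_failing_given_alarm = p_alarm_given_failing * p_failing / p_alarm
print(f"P(degrading | alarm) = {p_failing_given_alarm:.2%}")
```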


FBI: APTs Actively Exploiting Fortinet VPN Security Holes

Once exploited, the attackers are moving laterally and carrying out reconnaissance on targets, according to officials. “The APT actors may be using any or all of these CVEs to gain access to networks across multiple critical-infrastructure sectors to gain access to key networks as pre-positioning for follow-on data exfiltration or data encryption attacks,” the warning explained. “APT actors may use other CVEs or common exploitation techniques—such as spear-phishing—to gain access to critical infrastructure networks to pre-position for follow-on attacks.” The joint cybersecurity advisory from the FBI and CISA follows last year’s flurry of advisories from U.S. agencies about APT groups using unpatched vulnerabilities to target federal agencies and commercial organizations. For instance, in October an alert went out that APTs were using flaws in outdated VPN technologies from Fortinet, Palo Alto Networks and Pulse Secure to carry out cyberattacks on targets in the United States and overseas. “It’s no surprise to see additional Fortinet FortiOS vulnerabilities like CVE-2019-5591 and CVE-2020-12812 added to the list of known, but unpatched flaws being leveraged by these threat actors,” said Narang.


How AI-powered BI tools will redefine enterprise decision-making

In this fourth wave, the traditional order of BI will be inverted. The traditional method of BI generally begins with a technical analyst investigating a specific question. For example, an electronics retailer may wonder whether a higher diversity of refrigerator models in specific geographies is likely to increase sales. The analyst blends relevant data sources (perhaps an inventory management system and a billing system) and investigates whether there is a correlation. Once the analyst has completed the work, they present a conclusion about past behavior. They then create a visualization for business decision makers in a system like Tableau or Looker, which can be revisited as the data changes. This investigation method works quite well, assuming the analyst asks the right questions, the number of variables is relatively well-understood and finite, and the future continues to look somewhat similar to the past. However, this paradigm presents several potential challenges in the future as companies continue to accumulate new types of data, business models and distribution channels evolve, and real-time consumer and competitive adjustments cause constant disruptions.
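
As a toy version of that traditional blend-and-correlate step (the data sources, column names, and numbers are all invented), the analyst's investigation might look like this:

```python
# Toy illustration of the analyst workflow described above: join two hypothetical
# sources (inventory and billing) by region and check whether refrigerator model
# diversity correlates with sales. All data and column names are made up.
import pandas as pd

inventory = pd.DataFrame({
    "region": ["NE", "SE", "MW", "W"],
    "fridge_models_stocked": [12, 25, 18, 30],
})
billing = pd.DataFrame({
    "region": ["NE", "SE", "MW", "W"],
    "fridge_sales_units": [340, 720, 510, 800],
})

merged = inventory.merge(billing, on="region")
corr = merged["fridge_models_stocked"].corr(merged["fridge_sales_units"])
print(f"Correlation between model diversity and sales: {corr:.2f}")
```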


How Going Back to Coding After 10 Years Almost Crushed Me

Containers, namely Docker, have really streamlined packaging and reduced environment-related issues as you move code through QA and into production. In the old days, you would develop on a system entirely different from the one where the code was deployed (i.e. code on Windows and deploy to Unix), which invariably led to bugs and more work on each test and release cycle. Also, in the past, a release, QA, or DevOps engineer would take code from an SCM tag and figure out how to compile, test, and migrate it — and usually uncover a whole bunch of hardcoded paths and variables or missing libraries and files that needed to be reworked or hacked up to work. ... I remember fairly long release cycles (as long as three months at a startup). After attending specification meetings to understand the requirements line by line, a developer could go to their desk and play games for a few weeks without having to issue a dreaded update on where they were. Now, you have a daily standup and a two-week sprint, so there is no more slacking! The role of the BA has also diminished with Agile, as developers now face users or product managers directly.


3 Reasons In-Memory Computing Is Essential for Microservices

The more advanced in-memory platforms support high-performance multiregion global architectures. This enables zero-downtime business operations via a high-performance shared memory layer that supports them. It also simplifies scaling these services up to more fully leverage the promise of cloud native and serverless. These platforms also provide features such as automated disaster recovery, zero-downtime code deployments (blue-green deployments), and rolling product upgrades, as well as tools to integrate these seamlessly into modern cloud DevOps automation tools and the new AIOps tools that help monitor these architectures and deliver auto-scaling and autonomous troubleshooting. For a concrete example of how these could be employed, imagine having many microservices in an online shopping application. These include separate capabilities that power browsing for products, adding and removing items from a shopping cart, and so on. Moreover, each of these microservices can be somewhat independent of the others. But some actions, like checking out, fulfillment, and shipping, may require multistep orchestration and some roll-back behavior.
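
One common way to get that multistep orchestration with roll-back is a saga-style coordinator. The sketch below is only an illustration with invented step names and plain Python, not any in-memory platform's API: each step has a compensating action, and if a later step fails, the compensations for the steps that already completed run in reverse order:

```python
# Hedged sketch of saga-style orchestration with compensation on failure.
def run_saga(steps):
    """Run (name, action, compensation) steps; on failure, undo completed steps in reverse."""
    completed = []
    for name, action, compensate in steps:
        try:
            action()
            completed.append((name, compensate))
        except Exception as exc:
            print(f"step '{name}' failed ({exc}); rolling back")
            for done_name, undo in reversed(completed):
                print(f"compensating '{done_name}'")
                undo()
            return False
    return True

# Hypothetical checkout flow for the shopping example above.
def reserve_inventory():  print("inventory reserved")
def release_inventory():  print("inventory released")
def charge_payment():     raise RuntimeError("card declined")   # simulated failure
def refund_payment():     print("payment refunded")
def schedule_shipping():  print("shipment scheduled")
def cancel_shipping():    print("shipment cancelled")

ok = run_saga([
    ("reserve inventory", reserve_inventory, release_inventory),
    ("charge payment",    charge_payment,    refund_payment),
    ("schedule shipping", schedule_shipping, cancel_shipping),
])
print("checkout succeeded" if ok else "checkout rolled back")
```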


Keeping your data safe from hackers while working from home

One of the big changes the move towards remote working has brought about is removing employees from the protection of the corporate firewall. Working from inside the office provides people with anti-virus and other protections that can help to filter out some attacks. Now, instead of this, many people are working on their own computers from their homes, where they may not have anti-virus at all – and their home router won't provide a robust defence against attackers like a corporate firewall would. Criminals know this and are looking to take advantage with cyberattacks, especially when people – rushed off their feet while balancing working from home with the rest of their life – might unintentionally click on a phishing link or respond to a request that appears to come from a colleague but is actually a cyber criminal. "Humans are ultimately fallible. Unfortunately it's the organic matter behind the keyboard, which is often the vulnerable part of the loop," says Troy Hunt, creator of HaveIBeenPwned and digital advisor to Nord Security.


Booking.Com's GDPR Fine Should Serve as 'Wake-Up Call'

While the incident itself was troubling, the Dutch Data Protection Authority called out Booking.com for its response to the breach. The company, according to the report, first found out about the security lapse on Jan. 13, 2019, but waited until Feb. 7 of that year to alert authorities. Under GDPR rules, organizations must report a breach within 72 hours of becoming aware of it. By the time Booking.com notified the Dutch Data Protection Authority, more than 20 days had elapsed. Monique Verdier, the vice president of the Dutch privacy watchdog, noted in the report that the delay in reporting the incident could have put additional customers at risk and showed a disregard for their data. "That speed is very important for the victims of a leak," Verdier said. "After such a report, the AP can, among other things, order a company to immediately warn affected customers. In this way, for example, to prevent criminals from having weeks to continue trying to defraud customers." A spokesman for the company could not be immediately reached for comment on Friday, but the report notes that Booking.com would not appeal the fine.



Quote for the day:

"Enjoy the little things, for one day you may look back and realize they were the big things." -- Robert Brault