Daily Tech Digest - November 18, 2020

ThreatList: Pharma Mobile Phishing Attacks Turn to Malware

“The reason that mobile devices have become a primary target is because a well-crafted attack can be close to impossible to spot,” said Schless. “Mobile devices have smaller screens, simplified user interfaces, and people generally exercise less caution on them than they do on computers.” Meanwhile, where cybercriminals previously relied on phishing attacks aimed at credential harvesting, in 2020 the aim shifted to malware delivery. For instance, in the fourth quarter of 2019, 83 percent of attacks attempted credential harvesting while 50 percent aimed to deliver malware. In the first quarter of 2020, however, only 40 percent of attacks targeted credentials, while 78 percent aimed to deliver malware. And in the third quarter of 2020, 27 percent targeted credentials and 81 percent looked to load malware. Researchers believe this shift signifies that attackers are investing more in malware when targeting pharmaceutical companies. For one, successfully delivering spyware or surveillanceware to a device could give the attacker longer-term access. Furthermore, said researchers, attackers want to be able to observe everything the user does and look into the files their device accesses and stores.


Don't put data science notebooks into production

Putting a notebook into a production pipeline effectively puts all the experimental code into the production code base. Much of that code isn't relevant to the production behavior, and will confuse people making modifications in the future. A notebook is also a fully powered shell, which is dangerous to include inside a production system. Safe operations require reproducibility and auditability, and generally eschew manual tinkering in the production environment. Even well-intentioned people can make a mistake and cause unintended harm. What we need to put into production is the concluding domain logic and (sometimes) visualizations. In most cases, this isn't difficult, since most notebooks aren't that complex. Notebooks encourage only linear scripting, which is usually small and easy to extract into a full codebase. If it's more complex, how do we even know that it works? Linear scripts are fine for a few lines of code but not for dozens. You’ll generally want to break that code up into smaller, modular, testable pieces so that you can be sure it actually works and, perhaps later, reuse it for other purposes without duplication. So we’ve argued that running notebooks directly in production usually isn’t helpful or safe, and that extracting their logic into a structured code base isn’t hard.
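To make the extraction step concrete, here is a minimal, hypothetical sketch: the "concluding domain logic" a notebook might end with, moved into a small function that can live in a codebase and be unit-tested. The names and data are illustrative, not from any real notebook.

```python
# Illustrative sketch: the same logic a linear notebook cell might
# contain, extracted into a small, importable, testable function.

def summarize_orders(orders):
    """Aggregate order totals per customer -- the 'concluding domain
    logic' of a hypothetical notebook, now unit-testable."""
    totals = {}
    for order in orders:
        totals[order["customer"]] = totals.get(order["customer"], 0) + order["amount"]
    return totals

# A quick check that doubles as a unit test -- something a raw
# notebook cell running in production never gets.
sample = [
    {"customer": "a", "amount": 10},
    {"customer": "b", "amount": 5},
    {"customer": "a", "amount": 7},
]
print(summarize_orders(sample))  # {'a': 17, 'b': 5}
```

Once the logic lives in a function like this, the notebook itself can import and call it during exploration, while production pipelines use the same tested code path.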


Why the CMO and CIO are no longer strange bedfellows

The CIO’s mandate is all systems, both customer-facing and internal. Increasingly, this involves capturing market and customer data from sensors and interpreting it with artificial intelligence. In turn, IT leaders supply the capabilities needed to meet line-of-business demands for agility and speed. The CMO’s mandate is to apply that derived intelligence about customer needs and habits, profiling customers down to the individual level, to create an experience that meets the customer wherever, whenever, and on any device. Understanding the customer is therefore central to both mandates. The CIO needs to connect technology capabilities all the way from the customer interaction back to the related workload, sitting on the chosen infrastructure platform. The CMO needs a complete profile of the customer, and the CIO builds the systems that create that profile. In the current climate, businesses that fail to understand the importance of the digital customer experience will undoubtedly fall behind. Embracing the customer as a digital experience is essential for business competitiveness and even survival.


Understanding Microsoft .NET 5

Technically this new release should be .NET Core 4, but Microsoft is skipping a version number to avoid confusion with the current release of the .NET Framework. At the same time, moving to a higher version number and dropping Core from the name indicates that this is the next step for all .NET development. Two projects still retain the Core name: ASP.NET Core 5.0 and Entity Framework Core 5, since legacy projects with the same version numbers still exist. It’s an important milestone, marking the point where you need to consider starting all new projects in .NET 5 and moving any existing code off the .NET Framework. Although Microsoft isn’t removing support for the .NET Framework, it’s in maintenance mode and won’t get any new features in future point releases. All new APIs and community development will be in .NET 5 (and 2021’s long-term support .NET 6). Some familiar technologies, such as Web Forms and the Windows Communication Foundation, are being deprecated in .NET 5. If you’re still using them, it’s best to remain on .NET Framework 4 for now and plan a migration to newer, supported technologies such as ASP.NET’s Razor Pages or gRPC. There are plans for community-supported alternative frameworks that will offer similar APIs.


Top 8 trends shaping digital transformation in 2021

Consumers want consistent engagement with brands across their preferred channels. Seventy-three percent of shoppers use more than one channel during their shopping journey. Per Deloitte, seventy-five percent of consumers expect consistent interactions across all departments of a company. Eighty-six percent of consumers say they want the ability to move between channels when talking to a brand. Ninety-two percent of customers are satisfied using live chat services -- making it the support channel with the highest customer satisfaction. And 78% of consumers use mobile devices to connect with brands for customer service -- a number that jumps to 90% among Millennials. Organizations need to invest in new digital methods of customer service. ... Research shows that lines of business (LoBs) are participating in digital transformation, with 68% of LoB users believing IT and LoBs should jointly drive digital transformation. In addition, 51% of LoB users are frustrated by the speed at which their organization's IT department can deliver digital projects. Outside of IT, the top three business roles with integration needs are business analysts, data scientists, and customer support.


Q&A on the Book Virtual Teams Across Cultures

Firstly, it is important to understand the meaning of culture. In the book, I go into more detail, but for now we can say that culture is the meaning that a group of people give to understand life and interpret their experience. Culture is a social construct, meaning that it develops through the interaction of people. As humans, we are influenced by many cultures, such as company culture. The book focuses on country or location culture. When we work with people from the same culture, things tend to go smoothly. In general, we understand each other’s communication style, work approach, reactions and ideas. It all makes sense because the assumptions that drive us are similar. However, when we meet someone from a different culture, we may not understand or we may be surprised by their communication style, work approach, reactions and ideas. The assumptions that drive their behavior are fundamentally different. This is what we call culture shock – that feeling of confusion because the other person does not make sense to us. People who work internationally have most likely experienced culture shock. The critical aspect is how we respond to it. 


Can Low Code Measure Up to Tomorrow’s Programming Demands?

There is some disagreement on whether AI and machine learning will be able to write code, says Forrester’s Jeffrey Hammond, vice president and principal analyst serving CIO professionals. “One camp is saying, ‘In the future, AI is going to write a lot of the code that developers might write today,’” he says. That could lead to less demand for developers, with fewer positions to be filled. The counter view, Hammond says, is that software development is a creative process and profession. For all its capabilities, AI has limits that might not match the novel thinking of developers, he says. “Some of the most valuable code that’s written is also the most creative code.” Today AI is used successfully in testing, Hammond says, an area many developers are loath to write test cases for. He sees market adjacencies in development tools such as Microsoft Visual Studio, which has a feature that can predict what a developer may type next, then make that available for the developer to click. “You’ve got examples of where these tools are augmenting developers’ working habits and making them more productive,” Hammond says. In the creative space, Adobe Sensei technology can help designers automate tedious tasks, he says, such as stitching together photos or removing undesired artifacts from content.


Vulnerability Prioritization Tops Security Pros' Challenges

This should come as no surprise to anyone working in software development. Software development organizations are using more application security tools than ever before and from the earliest stages of development. Most are on top of detection, but that's only the first step. Next comes prioritization: Once you've detected the security issues, how can you make sure you are addressing the most critical issues first? While prioritization is essential for organizations that want to get ahead of their backlog, they are still struggling to formulate a standardized prioritization process. Even though vulnerability prioritization rated very high on application security professionals' list of top challenges, the WhiteSource survey found that most security and development teams don't follow a shared process for prioritization. The survey asked to what extent the security and development teams in their organization agree on which vulnerabilities need to be fixed, and the results were concerning: 58% of respondents said they sometimes agree, but each team follows ad hoc practices and separate guidelines. Only 31% of respondents said they have an agreed-upon process to determine priorities.
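As an illustration of what a shared, standardized prioritization process could look like at its simplest, here is a hedged Python sketch that ranks findings by a score derived from CVSS plus a couple of adjustments both teams agree on. The field names, weights, and CVE identifiers are invented for the example; they are not from the WhiteSource survey or any particular scanner.

```python
# Hypothetical shared prioritization rule: rank findings by CVSS,
# boosted when a public exploit exists and discounted when the
# vulnerable code is unreachable in this deployment. All values
# here are illustrative.

def priority(finding):
    score = finding["cvss"]
    if finding.get("exploit_available"):
        score += 2.0   # a known exploit makes the issue more urgent
    if not finding.get("reachable", True):
        score -= 3.0   # unreachable code lowers practical risk
    return score

findings = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_available": False},
    {"id": "CVE-B", "cvss": 7.5, "exploit_available": True},
    {"id": "CVE-C", "cvss": 9.1, "reachable": False},
]
ordered = sorted(findings, key=priority, reverse=True)
print([f["id"] for f in ordered])  # ['CVE-A', 'CVE-B', 'CVE-C']
```

The point is not the specific formula but that it is written down once: security and development then argue about the weights, not about each individual ticket.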


Fast-Tracking AI Ethics Is Dicey And Shortsighted, Especially For Self-Driving Cars

Somehow, there needs to be a balance found that can appropriately make use of the AI Ethics precepts and yet allow for flexibility when there is a real and fully tangible basis to partially cut corners, as it were. Of course, some would likely abuse the possibility of a slimmer version and always go that route, regardless of any truly urgent timing need. Thus, there is a chance of opening a Pandora’s box whereby a less-than-full AI Ethics protocol becomes the default norm, rather than serving as a break-glass exception for those rare occasions when it is genuinely needed. It can be hard to put the genie back in the bottle. In any case, there are already some attempts at crafting a fast-track variant of AI Ethics principles. We can perhaps temper those that leverage the urgent version with both a stick and a carrot. The carrot is obvious: they are seemingly able to get their AI completed sooner. The stick is that they will be held wholly accountable for not having gone the whole nine yards on the use of AI Ethics. This is a crucial point that might be used against those taking such a route and be a means to extract penalties via a court of law, along with penalties in the court of public opinion.


How to boost your enterprise's immunity with cyber resilience

Cyber security and cyber resilience are often used interchangeably. While they are related concepts, they're far from being synonyms, and it's crucial for everyone to understand the difference. Security is like wearing a mask or using other forms of personal protective equipment to reduce your risk of being infected with a virus. Resiliency is, after having been infected, fighting through the illness and giving your body a chance to return to good health. This means that cyber security is the protection and restoration of IT assets—hardware and software, in the cloud and on premises—and the data they contain, to ensure their availability and integrity. Resiliency, on the other hand, focuses on the ability of the business to withstand and recover from these breaches. The scope extends beyond IT and information to business operations and processes. The U.S. National Institute of Standards and Technology (NIST) defines cyber resilience as "the ability of an information system to continue to operate under adverse conditions or stress, even if in a degraded or debilitated state, while maintaining essential operational capabilities; and to recover to an effective operational posture in a time frame consistent with mission needs."



Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." -- Jamie Paolinetti

Daily Tech Digest - November 17, 2020

SD-WAN needs a dose of AIOps to deliver automation

In some ways, SD-WAN exacerbates the troubleshooting problem. It adds a level of resiliency to the network via multi-path networking that can hide outages. This leads to a situation where the network operations dashboard shows everything is "green," but apps are performing poorly. Network performance issues have become glaringly obvious with the rise of video, and they are causing network engineers to constantly scramble to remediate issues. Here is where AI can make a difference. AI systems can ingest the massive amounts of data provided by network infrastructure (LAN, WLAN and WAN) to "see" things that even the savviest network engineer can't. At one time, when networks were fairly simple and traffic volumes were lower, it was possible for a seasoned network professional to "know" a network and quickly find the root of problems through a combination of domain knowledge and rapid inspection of traffic. Not so today, as the number of devices and applications and the volume of information have skyrocketed. One of the big changes is that periodic polling data has been replaced by real-time streaming telemetry that increases data volume by an order of magnitude or more.
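As a toy illustration of the kind of analysis an AIOps system automates at vastly larger scale, the sketch below flags telemetry samples that deviate sharply from a rolling baseline. The window size, threshold, and latency figures are arbitrary choices for the example.

```python
# Illustrative anomaly detection over streaming telemetry: flag any
# sample more than `threshold` standard deviations away from the
# rolling mean of the previous `window` samples.

from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=5, threshold=3.0):
    """Return (index, value) pairs for samples that deviate sharply
    from the rolling baseline of the preceding `window` samples."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Invented per-link latency samples (ms); the spike at index 6 is the
# kind of event a "green" dashboard can miss when paths fail over.
latency_ms = [20, 21, 19, 22, 20, 21, 250, 20, 22]
print(detect_anomalies(latency_ms))  # [(6, 250)]
```

A real AIOps platform would run this sort of baseline comparison across thousands of metrics simultaneously and correlate the results, which is exactly what no human engineer can do by inspection.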


Ripe for digital disruption: Which industries are most at risk and why

The changing demographics favor workers who are much more open to gig work and who place greater trust in digital platforms to create marketplaces. This has opened the door to changes in typically cohesive industries, such as higher education. The increased demand for digital skills has led many students to decouple academic interest and professional credentialing. This will lead to an exodus from costlier schools in favor of boutique schools that cater to narrower interests. Students will earn digital credentials from specific, technology-heavy institutions like Lambda School in their early career, and pursue further growth and learning throughout their career from organizations such as Coursera or LinkedIn Learning. Generation Z has grown up with democratized value creation, like YouTube channels or Twitch streamers that organically found their base and built their audience using digital techniques. These new, digital entities can see the most valuable part of a business process and align themselves to it while outsourcing the other aspects with great velocity. Tesla, for example, has done away with its PR department and is relying on its outspoken CEO to directly message the market.


The seven elements of successful DDoS defence

Because multiple computers from a globally dispersed botnet “zombie army” of hijacked internet-connected devices are attempting to flood a server with fake traffic to knock it offline, DDoS attacks are already more destructive than Denial of Service (DoS) attacks perpetrated from one machine. However, in recent years we’ve monitored a disturbing trend: DDoS used as a smokescreen. The service disruption draws the IT team’s attention away from a separate and more sophisticated incursion, such as account takeover or phishing. The damage of just the DDoS can be bad enough. It takes a targeted website minutes to go down in a strike, but hours to recover. In fact, 91% of organisations have experienced downtime from a DDoS attack, with each hour of downtime costing an average of $300,000. Beyond the revenue loss, DDoS can erode customer trust, force businesses to spend large amounts in compensations, and cause long-term reputational damage; particularly if it leads to other breaches. ... A comprehensive defence is essential, but with attacks ranging from massive volumetric bombardments to sophisticated and persistent application layer threats, what are the most important elements of potential solutions to consider?
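Rate limiting is one common building block in the layered defence the article goes on to discuss. The token-bucket sketch below is purely illustrative -- real DDoS mitigation happens at the network edge and at far higher volumes -- but the mechanism is the same shape: absorb legitimate bursts while shedding sustained floods.

```python
# Minimal token-bucket rate limiter, one common element of DDoS
# mitigation (applied per client IP at the edge in practice). A
# deterministic clock is passed in for clarity.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)
# Three requests at t=0: a burst of two passes, the third is shed.
print([bucket.allow(0.0), bucket.allow(0.0), bucket.allow(0.0)])  # [True, True, False]
# One second later a token has refilled.
print(bucket.allow(1.0))  # True
```

Against a volumetric flood this alone is insufficient -- hence the article's point that a comprehensive, multi-element defence is essential.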


Breakdown of a Break-in: A Manufacturer's Ransomware Response

At the 2020 (ISC)² Security Congress, SCADAfence CEO Elad Ben-Meir took the virtual stage to share details of a targeted industrial ransomware attack against a large European manufacturer earlier this year. His discussion of how the attacker broke in, the collection of forensic evidence, and the incident response process offered valuable lessons to an audience of security practitioners. The firm learned of this attack late at night when several critical services stopped functioning or froze altogether. Its local IT team found ransom notes on multiple network devices and initially wanted to pay the attackers; however, after the adversaries raised their price, the company contacted SCADAfence's incident response team. ... Before it arrived on-site, the incident response team instructed the manufacturer to contain the threat to a specific area of the network and prevent the spread of infection, minimize or eliminate downtime of unaffected systems, and keep the evidence in an uncontaminated state. "The initial idea was to try to understand where this was coming from, what machines were infected and what machines those machines were connected to, and if there was the ability to propagate additionally from there," said Ben-Meir in his talk.


Sustainability: The growing issue of supply chain disruption

There is likely to be more disruption ahead as extreme weather events appear to be on the rise. According to McKinsey, climate disruptions to supply chains are going to become increasingly frequent and more severe. Kern said: “It’s a mathematical effect that the number of natural catastrophes has been increasing massively in recent years. If you look at Hurricanes Katrina, Harvey, Irma and Maria as well as the Japanese earthquake and the Thai floods you can see that we are getting loss events far above the previous average of around $50bn. We’re seeing nat cats causing losses up to $150bn of insured value, so as you can imagine this is a very big concern for us.” Baumann pointed out that as well as more extreme weather, other future trends could play a role. He said: “There are several drivers of disruption. The complexity of supply chains is increasing, and more complexity means more potential points of failure. Even simple goods can have as many as ten suppliers. That in turn adds to the risk that transportation and production may be disrupted.” At the same time, practices such as just-in-time delivery or lean manufacturing can also introduce risks, particularly when organisations are focused purely on reducing costs.


Figuring out programming for the cloud

The trick, says Rosoff, is to give the programmer enough of a language to express the authorization rule, but not so much freedom that they can break the entire application if they have a bug. How does one determine which language to use? Rosoff offers three decision criteria: Does the language allow me to express the complete breadth of programs I need to write? (In the case of authorization, does it let me express all of my authZ rules?); Is the language concise? (Is it fewer lines of code and easier to read and understand than the YAML equivalent?); Is the language safe? (Does it stop the programmer from introducing defects, even intentionally?). We still have a ways to go to make declarative languages the easy and obvious answer to infrastructure-as-code programming. One reason developers turn to imperative languages is that they have huge ecosystems built up around them with documentation, tooling, and more. Thus it’s easier to start with imperative languages, even if they’re not ideal for expressing authorization configurations in IaC. We also still have work to do to make the declarative languages themselves approachable for newbies. This is one reason Polar, for example, tries to borrow imperative syntax.
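Rosoff's three criteria can be made concrete with a deliberately tiny "declarative" evaluator: the rules are plain data, and the evaluator is too limited to break the surrounding application. This is a hypothetical Python sketch of the shape of the problem, not Polar's actual syntax or semantics.

```python
# Hypothetical sketch of declarative authorization: rules are data,
# and a small, deliberately constrained evaluator applies them. The
# roles, actions, and inheritance here are invented for illustration.

RULES = [
    # (role, action, resource_type)
    ("admin",  "delete", "document"),
    ("editor", "write",  "document"),
    ("viewer", "read",   "document"),
]

# Role inheritance: an admin can do anything an editor or viewer can.
ROLE_IMPLIES = {"admin": {"editor", "viewer"}, "editor": {"viewer"}}

def allowed(role, action, resource_type):
    roles = {role} | ROLE_IMPLIES.get(role, set())
    return any((r, action, resource_type) in RULES for r in roles)

print(allowed("editor", "write", "document"))   # True
print(allowed("editor", "delete", "document"))  # False
print(allowed("admin", "read", "document"))     # True, via inheritance
```

Measured against the criteria: the rule data is expressive enough for simple authZ, more concise than the equivalent imperative checks scattered through an app, and safe in the sense that a bad rule can deny or grant access but cannot execute arbitrary code.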


A Cloud-Native Architecture for a Digital Enterprise

Cloud-native applications are all about dynamism, and microservice architecture (MSA) is critical to accomplish this goal. MSA helps to divide and conquer by deploying smaller services focusing on well-defined scopes. These smaller services need to integrate with different software as a service (SaaS) endpoints, legacy applications, and other microservices to deliver business functionalities. While microservices expose their capabilities as simple APIs, ideally, consumers should access these as integrated, composite APIs to align with business requirements. A combination of API-led integration platform and cloud-native technologies helps to provide secured, managed, observed, and monetized APIs that are critical for a digital enterprise. The infrastructure and orchestration layers represent the same functionality that we discussed in the cloud-native reference architecture. Cloud Foundry, Mesos, Nomad, Kubernetes, Istio, Linkerd, and OpenPaaS are some examples of current industry-leading container orchestration and service mesh platforms. Knative, AWS Lambda, Azure Functions, Google Functions, and Oracle Functions are a few examples of function-as-a-service (FaaS) platforms.


New streaming and digital media rules by Indian government rattles industry

So, what exactly does this rule portend? It's not entirely clear. To some who earn their bread and butter monitoring these industries, the prognosis is dire. Nikhil Pahwa, a digital rights activist and founder of MediaNama, a prominent website that covers these industries, said this to the Guardian: "The fear is that with the Ministry of Information and Broadcasting -- essentially India's Ministry of Truth -- now in a position to regulate online news and entertainment, we will see a greater exercise of government control and censorship." If this becomes reality, it would wreck the plans of companies such as Netflix and Amazon, which have seen their fortunes rise dramatically in the last few years with the spectacular boom in smartphones and cheap data, both goldmines that keep on giving. The COVID era has only added more fuel to this trend. Eager to capitalise on this nascent market, Netflix has already pumped $400 million into the country and amassed 2.5 million precious subscribers. Consulting outfit PwC predicts that India's media and entertainment industry will grow at a brisk 10.1% clip annually to reach $2.9 billion by 2024.


Executive Perspective: Privacy Ops Meets DataOps

PrivacyOps is emerging because privacy considerations can no longer be an afterthought in an organization’s software development lifecycle -- they need to be tightly integrated. There is pressure on organizations to prove they are taking responsibility for personal data and acting in compliance with regulations, and it’s only going to increase. The real opportunity that the emergence of PrivacyOps presents is bringing security and privacy processes together, and standardizing best practices that need to be implemented across organizations. It’s far too easy for engineering, analytics, and compliance teams to talk over each other. Bringing these domains together through software will help to set expectations across the industry about privatizing data assets. Techniques such as k-anonymization, for example, are practiced by some of the best teams in healthcare, but they are hardly commonplace, despite being relatively easy to implement. To deliver compliant analytics, you need data engineers that can reliably ship the data from place to place, while implementing the appropriate transformations. However, what actually needs to be done is often not very clear to the engineering team. Data scientists want as much data as possible; compliance teams are pushing to minimize the data footprint. Regulations are in flux and imprecise.
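The k-anonymization technique mentioned above can be stated briefly: a dataset is k-anonymous with respect to a set of quasi-identifiers if every combination of those values appears at least k times, so no record can be singled out within its group. A minimal check, with invented column names and records:

```python
# Sketch of a k-anonymity check: every combination of quasi-identifier
# values must occur at least k times in the dataset. Records and
# field names below are illustrative.

from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    counts = Counter(
        tuple(row[q] for q in quasi_identifiers) for row in rows
    )
    return all(count >= k for count in counts.values())

records = [
    {"age": "30-40", "zip": "941**", "diagnosis": "flu"},
    {"age": "30-40", "zip": "941**", "diagnosis": "cold"},
    {"age": "40-50", "zip": "941**", "diagnosis": "flu"},
]
# The (age, zip) group "40-50"/"941**" has only one member, so the
# dataset fails 2-anonymity on those quasi-identifiers.
print(is_k_anonymous(records, ["age", "zip"], k=2))  # False
```

In practice, teams generalize the quasi-identifier columns (coarser age bands, shorter zip prefixes) until the check passes, which is exactly the kind of transformation data engineers are asked to ship without always knowing why.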


2021 predictions for the Everywhere Enterprise

While people will eventually return to the office, they won’t do so full-time, and they won’t return in droves. This shift will close the circle on a long trend that has been building since the mid-2000s: the dissolution of the network perimeter. The network and the devices that defined its perimeter will become even less special from a cybersecurity standpoint. ... Happy, productive workers are even more important during a pandemic, especially as employees are, on average, working three hours longer since the pandemic started, disrupting their work-life balance. It’s up to employers to focus on the user experience and make workers’ lives as easy as possible. When the COVID-19 lockdown began, companies coped by expanding their remote VPN usage. That got them through the immediate crisis, but it was far from ideal. On-premises VPN appliances suffered a capacity crunch as they struggled to scale, creating performance issues, and users found themselves dealing with cumbersome VPN clients and log-ins. It worked for a few months, but as employees settle in to continue working from home in 2021, IT departments must concentrate on building a better remote user experience.



Quote for the day:

"At first dreams seem impossible, then improbable, then inevitable." -- Christopher Reeve

Daily Tech Digest - November 16, 2020

System brings deep learning to “internet of things” devices

To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight — instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. “It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine. The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile-time. “We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.” In the group’s tests of TinyEngine, the size of the compiled binary code was between 1.9 and five times smaller than comparable microcontroller inference engines from Google and ARM. TinyEngine also contains innovations that reduce runtime, including in-place depth-wise convolution, which cuts peak memory usage nearly in half. After codesigning TinyNAS and TinyEngine, Han’s team put MCUNet to the test.
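The in-place depth-wise convolution idea can be illustrated with a toy example. Because depth-wise convolution never mixes channels, and output i of a channel depends only on input positions i and later, results can overwrite the input buffer left to right with no second activation buffer. This 1-D Python sketch mirrors only the memory-saving principle; it is not TinyEngine's implementation.

```python
# Toy in-place depth-wise convolution: each channel is convolved with
# its own kernel ('valid' 1-D convolution), and outputs overwrite the
# input buffer so peak memory stays at one buffer instead of two.

def depthwise_conv_inplace(channels, kernels):
    """channels: one sample list per channel; kernels: one kernel per
    channel. Overwrites and returns `channels`."""
    for ch, k in zip(channels, kernels):
        out_len = len(ch) - len(k) + 1
        for i in range(out_len):
            # ch[i] is read only by outputs <= i, so it is safe to
            # overwrite it once output i has been computed.
            ch[i] = sum(ch[i + j] * k[j] for j in range(len(k)))
        del ch[out_len:]  # trim tail positions that produce no output
    return channels

x = [[1, 2, 3, 4], [10, 20, 30, 40]]  # two channels of activations
w = [[1, 1], [1, -1]]                 # one kernel per channel
print(depthwise_conv_inplace(x, w))   # [[3, 5, 7], [-10, -10, -10]]
```

On a microcontroller with one megabyte of flash and no off-chip memory, halving the activation footprint in this way is the difference between fitting and not fitting.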


Beyond the Database, and Beyond the Stream Processor: What's the Next Step for Data Management?

The breadth of database systems available today is staggering. Something like Cassandra lets us store a huge amount of data for the amount of memory the database is allocated; Elasticsearch is different, providing a rich, interactive query model; Neo4j lets us query the relationship between entities, not just the entities themselves; things like Oracle or PostgreSQL are workhorse databases that can morph to different types of use case. Each of these platforms has slightly different capabilities that make it more appropriate to a certain use case but at a high level, they’re all similar. In all cases, we ask a question and wait for an answer. This hints at an important assumption all databases make: data is passive. It sits there in the database, waiting for us to do something. This makes a lot of sense: the database, as a piece of software, is a tool designed to help us humans — whether it's you or me, a credit officer, or whoever — interact with data.  But if there's no user interface waiting, if there's no one clicking buttons and expecting things to happen, does it have to be synchronous? In a world where software is increasingly talking to other software, the answer is: probably not.


Data warehousing workloads at data lake economics with lakehouse architecture

Data lakes in the cloud have high durability, low cost, and unbounded scale, and they provide good support for the data science and machine learning use cases that many enterprises prioritize today. But, all the traditional analytics use cases still exist. Therefore, customers generally have, and pay for, two copies of their data, and they spend a lot of time engineering processes to keep them in sync. This has a knock-on effect of slowing down decision making, because analysts and line-of-business teams only have access to data that’s been sent to the data warehouse rather than the freshest, most complete data in the data lake. ... The complexity from intertwined data lakes and data warehouses is not desirable, and our customers have told us that they want to be able to consolidate and simplify their data architecture. Advanced analytics and machine learning on unstructured and large-scale data are one of the most strategic priorities for enterprises today – and the growth of unstructured data is going to increase exponentially – therefore it makes sense for customers to think about positioning their data lake as the center of data infrastructure. However, for this to be achievable, the data lake needs a way to adopt the strengths of data warehouses.


What to Learn to Become a Data Scientist in 2021

Apache Airflow, an open source workflow management tool, is rapidly being adopted by many businesses for the management of ETL processes and machine learning pipelines. Many large tech companies, such as Google and Slack, are using it, and Google even built its Cloud Composer tool on top of this project. I am noticing Airflow being mentioned more and more often as a desirable skill for data scientists in job adverts. As mentioned at the beginning of this article, I believe it will become more important for data scientists to be able to build and manage their own data pipelines for analytics and machine learning. The growing popularity of Airflow is likely to continue, at least in the short term, and as an open source tool it is definitely something that every budding data scientist should learn. ... Data science code is traditionally messy, not always well tested and lacking in adherence to styling conventions. This is fine for initial data exploration and quick analysis, but when it comes to putting machine learning models into production, a data scientist will need a good understanding of software engineering principles. If you are planning to work as a data scientist, it is likely that you will either be putting models into production yourself or at least be heavily involved in the process.
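For readers who have not met Airflow, its core abstraction is a DAG of tasks executed in dependency order; scheduling, retries, sensors, and backfills are what the real tool layers on top. The dependency-free sketch below illustrates only that core idea and deliberately does not use Airflow's API -- the task names and data are invented.

```python
# Minimal sketch of what a workflow tool like Airflow manages: tasks
# declared with dependencies, executed so that every task runs after
# all of its upstreams. Not Airflow's API -- just the core idea.

def run_dag(tasks, deps):
    """tasks: name -> callable; deps: name -> list of upstream names.
    Returns the execution order."""
    done, order = set(), []
    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)          # ensure upstreams finish first
        tasks[name]()
        done.add(name)
        order.append(name)
    for name in tasks:
        run(name)
    return order

# A toy ETL pipeline: extract -> transform -> load.
results = {}
tasks = {
    "extract":   lambda: results.setdefault("raw", [3, 1, 2]),
    "transform": lambda: results.setdefault("clean", sorted(results["raw"])),
    "load":      lambda: results.setdefault("loaded", len(results["clean"])),
}
order = run_dag(tasks, {"transform": ["extract"], "load": ["transform"]})
print(order)             # ['extract', 'transform', 'load']
print(results["clean"])  # [1, 2, 3]
```

Everything a production scheduler adds -- retrying a failed `transform`, running the DAG daily, alerting on stalls -- is what makes learning the real tool worthwhile.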


WhatsApp Pay: Game changer with new risks

The payment instruction itself is a message to the partner bank, which then triggers a normal UPI transaction from the customer’s designated UPI bank to the destination partner bank through the National Payments Corporation of India (NPCI). The destination partner bank forwards the payment to the addressee’s default UPI bank registered with WhatsApp. A confirmation of credit is also sent through WhatsApp and reaches the message box of the recipient. It is possible that at either end, the WhatsApp partner bank may not be the customer’s bank. Hence, there may be the involvement of four banks, the NPCI and WhatsApp in completing the transaction. As far as the user is concerned, the system is managed by WhatsApp and none of the other players is visible. Though WhatsApp is not licensed to undertake UPI transactions directly, it engages the services of its partner banks to initiate the transaction. As these partner banks are not bankers for the customers, they engage two more banks to assist them. Finally, NPCI acts as the agent of the two banks through which the money actually passes through to the right bank. Thus, there is a chain of principal agent transaction and the roles of the customer, WhatsApp, banks, etc., need to be clarified. 


New Circuit Compression Technique Could Deliver Real-World Quantum Computers Years Ahead of Schedule

“By compressing quantum circuits, we could reduce the size of the quantum computer and its runtime, which in turn lessens the requirement for error protection,” said Michael Hanks, a researcher at NII and one of the authors of a paper published on November 11, 2020, in Physical Review X. Large-scale quantum computer architectures depend on an error correction code to function properly, the most commonly used of which is surface code and its variants. The researchers focused on the circuit compression of one of these variants: the 3D-topological code. This code behaves particularly well for distributed quantum computer approaches and has wide applicability to different varieties of hardware. In the 3D-topological code, quantum circuits look like interlacing tubes or pipes, and are commonly called “braided circuits.” The 3D diagrams of braided circuits can be manipulated to compress and thus reduce the volume they occupy. Until now, the challenge has been that such “pipe manipulation” is performed in an ad-hoc fashion. Moreover, there have only been partial rules for how to do this. “Previous compression approaches cannot guarantee whether the resulting quantum circuit is correct,” said co-author Marta Estarellas, a researcher at NII.


Microsoft Warns: A Strong Password Doesn’t Work, Neither Does Typical MFA 

“Remember that all your attacker cares about is stealing passwords...That’s a key difference between hypothetical and practical security.” — Microsoft’s Alex Weinert. In other words, the bad guys will do whatever is necessary to steal your password, and a strong password isn’t an obstacle when criminals have a lot of time and a lot of tools at their disposal. ... MFA based on phones, aka the public switched telephone network or PSTN, is not secure, according to Weinert. (What is typical MFA? It’s when, for example, a bank sends you a verification code via a text message.) “I believe they’re the least secure of the MFA methods available today,” Weinert wrote in a blog. “When SMS (texting) and voice protocols were developed, they were designed without encryption...What this means is that signals can be intercepted by anyone who can get access to the switching network or within the radio range of a device,” Weinert wrote. Solution: use app-based authentication, for example, Microsoft Authenticator or Google Authenticator. An app is safer because it doesn’t rely on your carrier. The codes are in the app itself and expire quickly.
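Authenticator apps typically generate those short-lived codes with TOTP (RFC 6238): the code is derived from a shared secret and the current time, so nothing ever travels over the carrier network. A minimal sketch of the algorithm, verified against the reference test vector from the RFC (real apps provision the secret via a base32 QR code rather than a raw byte string):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    counter = unix_time // step  # which 30-second window we are in
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F   # "dynamic truncation" picks 4 bytes
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the 8-digit code for the reference secret at T=59
print(totp(b"12345678901234567890", 59, digits=8))  # -> 94287082
```

Because the code depends only on the secret and the 30-second window, it expires almost immediately, which is exactly the property SMS codes lack.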


Defining data protection standards could be a hot topic in state legislation in 2021

Once the immediacy of the pandemic dissipates and the political heat cools, cybersecurity issues will likely surface again in new or revived legislation in many states, even if woven throughout other related matters. It’s difficult to separate cybersecurity per se from adjoining issues such as data privacy, which has generally been the biggest topic to involve cybersecurity issues at the state level over the past four years. “You really don’t have this plethora of state cybersecurity laws that would be independent of their privacy law brethren,” Tantleff said. According to the National Conference of State Legislatures, at least 38 states, along with Washington, DC, and Puerto Rico, introduced or considered more than 280 bills or resolutions that deal significantly with cybersecurity as of September 2020. Setting aside privacy and some grid security funding issues, there are two categories of cybersecurity legislative issues at the state level to watch during 2021. The first and most important is spelling out more clearly what organizations need to do to meet security and privacy regulations. The second is whether states will pick up election security legislation left over from the 2020 sessions.


The Case for Combining Next Generation Tech with Human Oversight

Human error is the main cause of security breaches, wrong data interpretation, mistaken insights, and a variety of other damning experiences the insights industry has had to wade through ever since its inception. Zooming out to take a wider look, human error is the cause of mistaken elections, aviation accidents, cybersecurity issues, etc., but also scientific breakthroughs across the world. While some mistakes yield true results, most have dangerous consequences that could have been avoided if we were more careful. To err is human, but in an industry where mistakes have real-world consequences, to err is to potentially cost a business its life. If we stick with the artificial intelligence and automation example, automated processes with next generation technology are the most poignant example of humans trying to make up for their mistakes, and they can help minimise human error at all stages ... The main benefit of combining human oversight with this next generation technology is that we can catch and fix any bugs that arise before they harm the research process and the projects that rely on said technology. But we need to be wary that humans cannot catch every mistake, and when one slips through, that is when oversight takes on a whole new, disappointing meaning.


Important Considerations for Pushing AI to the Edge

The decision on where to train and deploy AI models can be determined by balancing considerations across six vectors: scalability, latency, autonomy, bandwidth, security, and privacy. In terms of scalability, in a perfect world, we’d just run all AI workloads in the cloud, where compute is centralized and readily scalable. However, the benefits of centralization must be balanced out with the remaining factors that tend to drive decentralization. For example, if you depend on edge AI for latency-critical use cases in which autonomy is a must, you would never make a decision to deploy a vehicle’s airbag from the cloud when milliseconds matter, regardless of how fast and reliable your broadband network may be under normal circumstances. As a general rule, latency-critical applications will leverage edge AI close to the process, running at the Smart and Constrained Device Edges as defined in the paper. Meanwhile, latency-sensitive applications will often take advantage of higher tiers at the Service Provider Edge and in the cloud because of the scale factor. In terms of bandwidth consumption, the deployment location of AI solutions spanning the User and Service Provider Edges will be based on a balance of the cost of bandwidth, the capabilities of the devices involved and the benefits of centralization for scalability.



Quote for the day:

"If you want to do a few small things right, do them yourself. If you want to do great things and make a big impact, learn to delegate." -- John C. Maxwell

Daily Tech Digest - November 14, 2020

Data Scientist vs Business Analyst. Here’s the Difference.

Perhaps the biggest similarity between a Business Analyst and a Data Scientist lies in the words used to describe the roles themselves. A Data Scientist is expected to perform business analytics in their role, as it is essentially what dictates their Data Science goals. A Business Analyst can expect to focus not on Machine Learning algorithms to solve business problems, but instead on surfacing anomalies, shifts and trends, and key points of interest for a business. ... Of course, there are some key differences between these two roles. One of the biggest differences is that Machine Learning is used by Data Scientists only. Another difference is that a Business Analyst can expect to communicate more with stakeholders than a Data Scientist would (sometimes Data Scientist work can be more heads down and not involve as many meetings). Here is a summary of the differences you can expect to find between these positions. ... These two roles share goals with one another. Each requires a deep dive into data with similar tools as well. The process of communication is similar, too — working with stakeholders from the company to go over the business problem, solution, results, and impact. Here is a summary of the key similarities between a Data Scientist and a Business Analyst.


CISA Director Expects to Be Fired Following Secure Election

US officials delivered a statement emphasizing the security of this year's election as news of these firings began to unfold. Members of the Election Infrastructure Government Coordinating Council (GCC) Executive Committee and the Election Infrastructure Sector Coordinating Council (SCC) say this election "was the most secure in American history." Across the country, they add, officials are reviewing the election process, and states with close calls will recount ballots. "This is an added benefit for security and resilience," they wrote. "This process allows for the identification and correction of any mistakes or errors. There is no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised." Security measures including pre-election testing, state certification of voting equipment, and the US Election Assistance Commission's (EAC) certification of voting equipment contribute to confidence in the voting systems used in 2020, they said. Officials acknowledged the "many unfounded claims and opportunities for misinformation" about the election process and emphasized they have the "utmost confidence" in the election's security and integrity.


Security Awareness: Preventing Another Dark Web Horror Story

Our research from last year has already revealed that 1 in 4 people would be willing to pay to get their private information taken down from the dark web – and this number jumps to 50% for those who have experienced a hack. While only 13% have been able to confirm whether a company with which they’ve interacted has been involved in a breach, the reality is it’s much more likely than you’d think – since 2013, over 9.7 billion data records have been lost or stolen, and this number is only rising. Most of us would have no way of knowing whether our information is up for sale online. However, solutions now exist which proactively check for email addresses, usernames and other exposed credentials against third-party databases, alerting users should any leaked information be found. ...  Detection is undoubtedly pivotal in keeping ahead of fraudsters, but the foundations begin with awareness. The majority of breaches take place as a result of simple mistakes which can be easily addressed – using your Facebook password at work or failing to change the default settings of connected devices. But at the same time, businesses must stress the importance of being cyber-aware and foster a culture of security awareness throughout the organisation.
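The proactive credential checks mentioned above are often built on k-anonymity range queries, the approach popularized by Have I Been Pwned's Pwned Passwords API: the client sends only the first five hex characters of the credential's SHA-1 hash and compares the returned hash suffixes locally, so the credential itself never leaves the machine. A rough sketch of the client-side hashing (offline; the actual HTTPS range query is omitted):

```python
import hashlib

def hibp_range_parts(password):
    """Split a password's SHA-1 hash into the 5-char prefix sent to the
    range API and the suffix that is compared locally against the response."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return sha1[:5], sha1[5:]

prefix, suffix = hibp_range_parts("password")
print(prefix)  # -> 5BAA6  (the only part that ever leaves your machine)
```

With roughly a thousand leaked hashes sharing any given 5-character prefix, the service learns essentially nothing about which credential was checked.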


14 Finance Specialists Share Their Biggest Fintech Predictions For 2021

There will be additional “bank in a box” tech layers between fintechs and banks to allow spinning up partnerships on a faster timeline. I also see more back-end companies automating critical compliance functions such as Know Your Customer and regulatory change management. I also think we will see many more “regular” companies offering financial services, as well as increasing consolidation among fintech companies. – Jeanette Fast... A huge trend that will be seen is a renewed need for financial literacy. Covid-19 forced everyone to think about both their long- and short-term financial outlooks. What we have seen in the auto refinancing sector is that people don’t even know you can refinance a vehicle. You’ll find consumers who want to sharpen their finances and companies that will be trying to reach and educate them. – Tom Holgate, ... The rise of insurance tech will revolutionize the health insurance industry, with innovations ranging from digital health records to fitness tracking. The rise of smart contracts gives insurance companies a way to update their infrastructure and cut long-term costs while providing consumers with superior service. – Joseph Safina


How to Keep Up With Big Tech's Hiring Spree

If you’re realizing you need more tech skills to handle the new digital demands of your industry, look first at your existing workforce. Instead of spending time and money on hiring, look for ways to upskill employees who are interested in a more technical career path and have demonstrated an aptitude for learning. For example, someone in an administrative role who has quickly adapted to remote work might be a good candidate for a scrum master or project management role. If you don’t have the ability to train employees in-house, consider a partnership. ... Hiring, in general, is starting to pick up again. When the pandemic finally subsides and companies begin hiring in full force, most will be looking for talent in the same places. Instead of sourcing recent college grads, look for graduates from coding boot camps and other alternative skilling programs, or target self-taught learners. This crisis has demonstrated that online learning isn’t just possible; it’s a critical part of today’s young people’s development. The talent acquisition team at IBM has made a point to target so-called “new collar” workers to bolster its 360,000-employee workforce. The company has developed a robust learning program for people both inside and outside of the company who are interested in learning new technical skills.


Digital Robber Barons and Digital Vertical Integration

These Robber Barons leveraged vertical integration to create “economic moats” that locked out and blocked potential competitors. The term “economic moat”, popularized by Warren Buffett, refers to a business' ability to maintain competitive advantages in order to protect its long-term profits and market share from competing firms while charging monopoly-like prices to its customers and imposing onerous terms on its suppliers. Just like a medieval castle, the moat serves to protect the riches of those inside the castle from outsiders. Andrew Carnegie is an example of a Robber Baron who used vertical integration to create economic moats for Carnegie Steel. Carnegie Steel (later U.S. Steel) became the dominant steel supplier in the U.S. through the vertical integration of the steel value chain process. Carnegie owned not only the steel mills that produced the different grades and types of steel, but also the iron ore mines that supplied the main ingredient in steel production, the coke/coal mines that powered the blast furnaces from which steel was produced, and the railroads and shipping that transported the iron ore and coke to the steel mills and the finished steel products to its customers.


Building a secure hybrid cloud

If all your computing assets are stored in a single location which then experiences an extended power outage, phone service or internet outage, natural disaster, or terrorist attack, your business essentially grinds to a halt. Many larger organizations invest in constructing and maintaining multiple data centers for just that reason. For most small businesses, this added cost is beyond their capabilities. Cloud technology removes this challenge by placing the business continuity requirement entirely on the provider. Along the same lines as business continuity, cloud, because of its ubiquity, provides businesses with a competitive advantage over companies that still rely on legacy on-premises hardware-based solutions. Case in point: I recently worked with a company that had the phone lines at one of its locations go down. It took 3 days for 2 different phone companies to figure out whose fault it was and then finally fix the problem. During those 3 days, a busy office was completely down with no phone service whatsoever. This kind of service level might have been acceptable in 1992. However, in the 2020s that’s beyond unacceptable. A cloud communications provider with a guaranteed service-level agreement would have ensured that such a serious outage would never happen.


Testing in Production 101

To start, deploy your first feature to production with the default rule off for safety. This ensures that only the targeted users will have access to the feature. Next, run your automation scripts in production with targeted test users, as well as the regression suite to guarantee previously released features are not affected by your changes. With the feature flag off and only your targeted team members having access to the feature, you will officially be testing in production. This is the time to resolve any bugs and validate all proper functionality. It’s important to remember that because end users do not yet have access to your feature, they will not be impacted if anything does go wrong. After you’ve resolved the issues that appeared in your first test and you’re confident the feature will work properly, it’s time to use a canary release to open up the feature to 1% of your user base. The next days will be spent monitoring error logs and growing your confidence in the feature until you feel it’s appropriate to increase the percentage of users that can access your feature. Once you reach 100% of users and you know without a doubt that the feature works, it’s time to turn on the default rule for the feature.
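The percentage rollout described above is commonly implemented by hashing the user ID into a stable bucket, so a given user stays consistently in or out of the canary as the percentage grows. A minimal sketch (the function and feature names are illustrative, not any particular feature-flag product's API):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Return True if this user falls inside the rollout percentage.

    Hashing user_id together with the feature name gives every user a
    stable pseudo-random bucket in [0, 1) per feature, so raising the
    percentage only ever adds users; it never kicks existing ones out.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2 ** 64
    return bucket < percent / 100.0

# Start the canary at 1% of users, then widen it as confidence grows.
canary = [u for u in range(10_000) if in_rollout(str(u), "new-checkout", 1)]
```

Because the bucket is derived from a hash rather than stored state, no database of flag assignments is needed, and the same user gets the same answer on every request.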


Digital Twins: Bridging the Physical and Digital World

In short, a digital twin is a precise replica of the physical world preserved through updates on a real-time basis. It uses virtual reality, 3D data and graphics to create virtual buildings and other models of products, services, systems, processes, and so on. SAP Senior Vice President of IoT Thomas Kaiser says that this is “becoming a business imperative, covering the entire lifecycle of an asset or process and forming the foundation for connected products and services.” ... The concept of a digital twin has been around since 2002 but was overshadowed by IoT. However, it has made a resurgence and, in 2017, it was part of Gartner’s Top 10 Strategic Technology Trends. It has become cost-effective to implement and imperative in today’s business, combining virtual and physical worlds to enable analyses of data and monitoring of systems. It also helps forestall a problem before it occurs, avoid interruption, advance new opportunities, and plan for the future with simulations. Digital twins use real-world data to create simulations for predicting the production process. They incorporate IoT, Industry 4.0, Artificial Intelligence (AI), and software analytics to produce better results.


Self-Service Security for Developers Is the DevSecOps Brass Ring

The ability for organizations to fold self-service security functionality into these internal platforms tends to be highly correlated with the degree to which security integration has been achieved across the software delivery life cycle. The survey asked respondents to pick in which of the five phases of the life cycle security is integrated: requirements, design, building, testing, and deployment. It found the ratio of organizations with two or more phases integrated has gone up from 63% last year to 70% this year. The ratio of organizations with complete integration now stands at 12%. As the report explains, the self-service offering of security and compliance validation is intertwined with the push for greater integration. Meanwhile, among those with three to four phases of development integrated with security, 42% offer self-service security and compliance validation. And 58% of those that have achieved full security integration across all five phases say they provide self-service security. Companies that have fully integrated security are more than twice as likely to offer self-service security as firms with no security integration.



Quote for the day:

"When I finally got a management position, I found out how hard it is to lead and manage people." -- Guy Kawasaki

Daily Tech Digest - November 13, 2020

Manufacturing is becoming a major target for ransomware attacks

For cyber criminals, manufacturing makes a highly strategic target because in many cases these are operations that can't afford to be out of action for a long period of time, so they could be more likely to give in to the demands of the attackers and pay hundreds of thousands of dollars in bitcoin in exchange for getting the network back. "Manufacturing requires significant uptime in order to meet production and any attack that causes downtime can cost a lot of money. Thus, they may be more inclined to pay attackers," Selena Larson, intelligence analyst for Dragos, told ZDNet. "Additionally, manufacturing operations don't necessarily have the most robust cybersecurity operations and may make interesting targets of opportunity for adversaries," she added. The nature of manufacturing means industrial and networking assets are often exposed to the internet, providing avenues for hacking groups and ransomware gangs to gain access to the network via remote access technology such as remote desktop protocol (RDP) and VPN services or vulnerabilities in unpatched systems. As of October 2020, Dragos said there were at least 108 advisories containing 262 vulnerabilities impacting industrial equipment found in manufacturing environments during the course of this year alone.


Humanitarian data collection practices put migrants at risk

“Instead of helping people who face daily threats from unaccountable surveillance agencies – including activists, journalists and people just looking for better lives – this ‘aid’ risks doing the very opposite,” said PI advocacy director Edin Omanovic. To overcome the issues related to “surveillance humanitarianism”, the report recommends that all UN humanitarian and related bodies “adopt and implement mechanisms for sustained and meaningful participation and decision-making of migrants, refugees and stateless persons in the adoption, use and review of digital border technologies”. Specifically, it added that migrants, refugees and others should have access to mechanisms that allow them to hold bodies like the UNHCR directly accountable for violations of their human rights resulting from the use of digital technologies, and that technologies should be prohibited if it cannot be shown to meet equality and non-discrimination requirements. It also recommends that UN member states place “an immediate moratorium on the procurement, sale, transfer and use of surveillance technology, until robust human rights safeguards are in place to regulate such practices”. A separate report on border and migration “management” technologies published by European Digital Rights (EDRi), which was used to supplement the UN report ...


Machine Learning Testing: A Step to Perfection

Usually, software testing includes Unit tests, Regression tests and Integration tests. Moreover, there are certain rules that people follow: don’t merge the code before it passes all the tests, always test newly introduced blocks of code, and when fixing bugs, write a test that captures the bug. Machine learning adds more actions to your to-do list. You still need to follow ML’s best practices. Moreover, every ML model needs not only to be tested but evaluated. Your model should generalize well. This is not what we usually understand by testing, but evaluation is needed to make sure that the performance is satisfactory. ... First of all, you split the dataset into three non-overlapping sets. You use a training set to train the model. Then, to evaluate the performance of the model, you use two sets of data: Validation set - Having only a training set and a testing set is not enough if you do many rounds of hyperparameter tuning (which is always), and that can result in overfitting. To avoid that, you can select a small validation data set to evaluate the model. Only after you get maximum accuracy on the validation set do you bring the testing set into the game; and Test set (or holdout set) - Your model might fit the training dataset perfectly well. ...
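The three-way split described above takes only a few lines; the fractions and seed below are arbitrary choices for illustration:

```python
import random

def train_val_test_split(data, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle once, then carve off non-overlapping test and validation sets."""
    items = list(data)
    random.Random(seed).shuffle(items)      # deterministic shuffle
    n_test = int(len(items) * test_frac)
    n_val = int(len(items) * val_frac)
    test = items[:n_test]                   # held out until the very end
    val = items[n_test:n_test + n_val]      # used during hyperparameter tuning
    train = items[n_test + n_val:]          # used to fit the model
    return train, val, test

train, val, test = train_val_test_split(range(100))
print(len(train), len(val), len(test))  # -> 70 15 15
```

Fixing the seed matters: without it, every rerun would shuffle differently and examples could silently leak between the training and test sets.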


How The Future Of Deep Learning Could Resemble The Human Brain

For deep learning, the model training stage is very similar to the initial learning stage of humans. During early stages, the model experiences a mass intake of data, which creates a significant amount of information to mine for each decision and requires significant processing time and power to determine the action or answer. But as training occurs, neural connections become stronger with each learned action and adapt to support continuous learning. As each connection becomes stronger, redundancies are created and overlapping connections can be removed. This is why continuously restructuring and sparsifying deep learning models during training time, and not after training is complete, is necessary. After the training stage, the model has lost most of its plasticity and the connections cannot adapt to take over additional responsibility, so removing connections can result in decreased accuracy. Current methods, such as one unveiled in 2020 by MIT researchers, that attempt to shrink the deep learning model after the training phase have reportedly seen some success. However, if you prune in the earlier stages of training, when the model is most receptive to restructuring and adapting, you can drastically improve results.
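The usual mechanism for removing such connections is magnitude pruning: zero out the weights with the smallest absolute values. What distinguishes pruning during training from pruning after it is only the schedule of how much is removed and when. A toy sketch on a flat list of weights (real frameworks apply the same idea per layer, with masks over tensors):

```python
def prune_smallest(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Gradually sparsify across "epochs" rather than all at once after training,
# giving the remaining connections time to adapt between pruning steps.
weights = [0.5, -0.01, 0.3, 0.02, -0.7]
for epoch_sparsity in (0.2, 0.4):
    weights = prune_smallest(weights, epoch_sparsity)
print(weights)  # -> [0.5, 0.0, 0.3, 0.0, -0.7]
```

In a real training loop, a forward/backward pass and weight update would run between the pruning steps, which is exactly the plasticity the passage argues post-training pruning forfeits.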


Quantum Computing: A Bubble Ready to Burst?

If there is a quantum bubble, it’s inflated both by the new flurry of Sycamore-type academic work and a simultaneous push from private corporations to develop real-world quantum applications, like avoiding traffic jams, as a form of competitive advantage. We’ve known about the advantages that quantum physics can offer computing since at least the 1980s, when Argonne physicist Paul Benioff described the first quantum mechanical model of a computer. But the allure of the technology seems to have just now bitten enterprising businesspeople from the tiniest of startups to the largest of conglomerates. “My personal opinion is there’s never been a more exciting time to be in quantum,” says William Hurley. Strangeworks, the startup he founded in 2018, serves as a sort of community hub for developers working on quantum algorithms. Hurley, a software systems analyst who has worked for both Apple and IBM, says that more than 10,000 developers have signed up to submit their algorithms and collaborate with others. Among the collaborators—Austin-based Strangeworks refers to them as “friends and allies”—is Bay Area startup Rigetti Computing, which supplies one of the three computers that Amazon Web Services customers can access to test out their quantum algorithms.


C++ programming language: How it became the invisible foundation for everything, and what's next

As of September 2020, C++ is the fourth most popular programming language globally behind C, Java and Python, and – according to the latest TIOBE index – is also the fastest growing. C++ is a general-purpose programming language favored by developers for its power and flexibility, which makes it ideal for operating systems, web browsers, search engines (including Google's), games, business applications and more. Stroustrup summarizes: "If you have a problem that requires efficient use of hardware and also to handle significant complexity, C++ is an obvious candidate. If you don't have both needs, either a low-level efficient language or a high-level wasteful language will do." Yet even with its widespread popularity, Stroustrup notes that it is difficult to pinpoint exactly where C++ is used, and for what. "A first estimate for both questions is 'everywhere'," he says. "In any large system, you typically find C++ in the lower-level and performance-critical parts. Such parts of a system are often not seen by end-users or even by developers of other parts of the system, so I sometimes refer to C++ as an invisible foundation of everything."


Cybercrime To Cost The World $10.5 Trillion Annually By 2025

Cybercrime has hit the U.S. so hard that in 2018 a supervisory special agent with the FBI who investigates cyber intrusions told The Wall Street Journal that every American citizen should expect that all of their data (personally identifiable information) has been stolen and is on the dark web — a part of the deep web — which is intentionally hidden and used to conceal and promote heinous activities. Some estimates put the size of the deep web (which is not indexed or accessible by search engines) at as much as 5,000 times larger than the surface web, and growing at a rate that defies quantification. The dark web is also where cybercriminals buy and sell malware, exploit kits, and cyberattack services, which they use to strike victims — including businesses, governments, utilities, and essential service providers on U.S. soil. A cyberattack could potentially disable the economy of a city, state or our entire country. In his 2016 New York Times bestseller — Lights Out: A Cyberattack, A Nation Unprepared, Surviving the Aftermath — Ted Koppel reveals that a major cyberattack on America’s power grid is not only possible but likely, that it would be devastating, and that the U.S. is shockingly unprepared.


Role of FinTech in the post-COVID-19 world

As the global economy recovers from COVID-19, one particular area of focus for FinTech is financial inclusion. According to the World Bank, there are currently around 1.7 billion unbanked individuals worldwide, and FinTechs will be central to efforts to integrate these people into the global banking system. Doing so will help to mitigate the economic and social impact of the pandemic. According to Deloitte, FinTechs, in strategic partnerships with financial institutions, retailers and government sectors across jurisdictions, can help democratise financial services by providing basic financial services in a fair and transparent way to economically vulnerable populations. Digital finance is also expanding in other areas. Health concerns in the COVID-19 era have made physical cash payments less practical, opening the door to an increase in digital payments and e-wallets. Though cash use was predicted to decline in any case, COVID-19 has hastened that decline, due to concerns that handing over money can cause human-to-human transmission of the virus. According to a Mastercard survey looking at the implications of the coronavirus pandemic, 82 percent of respondents worldwide viewed contactless as the cleaner way to pay, and 74 percent said they will continue to use contactless payment post-pandemic.



DNS cache poisoning poised for a comeback: Sad DNS

Here's how it works: First, DNS is the internet's master address list. With it, instead of writing out an IPv4 address like "173.245.48.1," or an IPv6 address such as "2400:cb00:2048:1::c629:d7a2," one of Cloudflare's many addresses, you simply type in "http://www.cloudflare.com," DNS finds the right IP address for you, and you're on your way. With DNS cache poisoning, however, your DNS requests are intercepted and redirected to a poisoned DNS cache. This rogue cache gives your web browser or other internet application a malicious IP address. Instead of going to where you want to go, you're sent to a fake site. That forged website can then upload ransomware to your PC or grab your user name, password, and account numbers. In a word: Ouch! Modern defense measures -- such as randomizing both the DNS query ID and the DNS request source port, DNS-based Authentication of Named Entities (DANE), and Domain Name System Security Extensions (DNSSEC) -- largely stopped DNS cache poisoning. These DNS security methods, however, have never been deployed widely enough, so DNS-based attacks still happen. Now, though, researchers have found a side-channel attack, SAD DNS, that can be successfully used against the most popular DNS software stacks.
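The arithmetic behind those defenses is simple: a 16-bit DNS query ID alone gives an off-path spoofer at most 65,536 values to guess, while also randomizing the (roughly 16-bit) source port multiplies the search space to about four billion. Side channels like SAD DNS matter because they let an attacker learn the port and collapse that space back down. A back-of-the-envelope sketch:

```python
# Guessing space an off-path spoofer must cover to forge a DNS reply.
QUERY_ID_BITS = 16     # DNS transaction ID
SOURCE_PORT_BITS = 16  # randomized ephemeral port; an approximation, since
                       # the usable port range is actually somewhat under 2**16

id_only = 2 ** QUERY_ID_BITS
id_plus_port = 2 ** (QUERY_ID_BITS + SOURCE_PORT_BITS)
print(id_only, id_plus_port)  # -> 65536 4294967296
```

If a side channel reveals the source port, the attacker is back to brute-forcing only the 65,536 query IDs, which is feasible within a cache entry's lifetime.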


CIOs tasked to make healthcare infrastructure composable

The composable healthcare organization is a healthcare organization that can reconfigure its capabilities -- both its business and operating model -- at the pace of market change. We have lived in a world and in an industry where there's been stable business and operational models. If you're a provider organization or a payer organization or a life sciences company, those heritage business models have been pretty stable. That's in terms of how organizations think, their culture, the way their business is architected -- so the organizational structures, the way they collaborate, all the way down to the way we've architected technology. They've really done that in service of a relatively stable business and operating model. What we're marking here are three main points. On a very simple level it's this: Adaptability is more important than ever, adaptability is more possible than ever, adaptability can be done by the people who you and I are speaking to -- the people you're reporting for and the people we work with on the Gartner health team. The idea of adaptability is nothing new to CIOs, in general. If you go back to when many of today's CIOs were in high school or even in college, there was reusable code, object-oriented programming -- we've just gone through a decade-and-a-half of more data services and agile development. 



Quote for the day:

"If you genuinely want something, don't wait for it--teach yourself to be impatient." -- Gurbaksh Chahal

Daily Tech Digest - November 12, 2020

The Ever-Expanding List of C-Level Technology Positions

In decades past, it was relatively uncommon for IT leaders to be part of the top tier of executive management. Even those who held the title of chief information officer (CIO) often reported to someone other than the chief executive officer (CEO). But digital transformation has changed that. As enterprises seek new ways of doing business, CIOs have begun playing a bigger role in directing the overall strategy of the business. Several different surveys have found that more than half of CIOs now report to CEOs, and many CEOs list their CIOs as one of their most trusted advisors. ... However, while they might not be ascending to the top job, IT leaders are finding more opportunities to join the executive team. The twin trends of digital transformation and the rise of big data analytics have led many enterprises to create new C-level positions directly related to technology. In fact, some industry analysts have begun to wonder if organizations have created too many new C-level technology roles. Some are forecasting that in the years ahead enterprises might revamp their org structures to cut back on these new C-level positions. But for now, IT leaders seem to have more opportunities to fill C-level roles than ever before.


Applying Lean and Accelerate to Deliver Value: QCon Plus Q&A

It is important to understand that delay degrades the economic value of what we deliver - there is a cost to delays, and it can be significant. Think about the loss of opportunity or revenue if a software product is delivered late, especially in a highly competitive market segment. Delays also slow down feedback, which makes it harder to adapt to new information. You can also incur significant risk of outages or customer turnover if features are delivered late. With this in mind, just as we spend so much time optimizing and tuning the latency and throughput of our software systems, we should spend time to optimize and tune the latency and throughput of our development process. It turns out when you look at the math and dynamics of product delivery pipelines, the biggest contributor to delay is letting queues back up. Unlike in manufacturing, these queues are invisible in software development, so it is important that we make an effort to make them visible, and then address them quickly and aggressively. Two powerful ways to reduce queues are limiting work in progress and keeping your batch sizes small.
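The claim that queues are the biggest contributor to delay can be made concrete with Little's Law, W = L / λ: average time in the system equals average work-in-progress divided by throughput. A minimal sketch (the numbers are illustrative, not from the article):

```python
# Little's Law: average time in system W = L / lambda, where L is average
# work-in-progress (WIP) and lambda is throughput. Holding throughput
# constant, cutting WIP directly cuts cycle time.

def avg_cycle_time(wip, throughput_per_day):
    """Average days a work item spends in the pipeline."""
    return wip / throughput_per_day

# Same team throughput, different WIP limits:
print(avg_cycle_time(wip=30, throughput_per_day=3))  # 10.0 days
print(avg_cycle_time(wip=6, throughput_per_day=3))   # 2.0 days
```

This is why limiting work in progress and keeping batch sizes small are such effective levers: they shrink L without requiring any change in how fast the team actually works.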


Banking Trojan Can Spy on Over 150 Financial Apps

The Kaspersky researchers first came across the Ghimob Trojan in August while examining a Windows campaign related to another malware strain circulating in Brazil. "We believe this campaign could be related to the Guildma [Brazilian banking Trojan] threat actor for several reasons, but mainly because they share the same infrastructure," according to the report. "It is also important to note that the protocol used in the mobile version is very similar to that used for the Windows version." Unlike other types of Android-focused malware, the Ghimob Trojan does not disguise itself as a legitimate app hidden within the official Google Play Store. Instead, the fraudsters attempt to lure victims into installing a malicious file through a phishing or spam email suggesting that the recipient has some kind of debt, according to the report. The message includes an "informational" link for the victim to click on, which starts the malware delivery. The malicious link is usually disguised to appear as either a Google Defender, a Google Doc or a WhatsApp Updater, according to the report. If opened, it installs the Ghimob Trojan on the device. The malware's first step is to check for any emulators or debuggers; if one is found, the malware terminates itself.


How to stress-test your business continuity management

“You really need to be in a position to mitigate any potential risks both before a system is live and afterwards, so there are no nasty surprises. End-to-end testing of every platform, both independently and in terms of its integration with the wider network of systems, is therefore critical. However, this needs to be balanced against the need to deliver with speed and certainty – so strong automated testing should be seen as a standard component of your production systems. “This will usually be provided by an independent quality assurance specialist. At Expleo we actually automate this process for clients to account for the complexity and speed of the technology and release cycles. Automated testing not only safeguards quality, but also adds value by providing immediate speed and efficiency gains. “First, ML cuts through the testing workload and sieves the data at scale, surfacing the highest-priority test cases. Then, AI analyses this data in real time, so we can respond to risks before they become issues. This is used as the basis for predictive analysis – so you can predict where risk is going to emerge and mitigate it in the most cost-effective way.”
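The "standard component" kind of automated testing described above often starts as little more than a smoke test run on every release. A minimal sketch in plain Python; `fetch_status` and its fields are hypothetical stand-ins for a real health-check call, not anything from Expleo's tooling:

```python
# Minimal automated smoke test. `fetch_status` is a hypothetical stub;
# in a real pipeline it would be an HTTP call to the service's health
# endpoint, executed by CI on every deployment.

def fetch_status():
    # Stub response; replace with e.g. a requests.get(...).json() call.
    return {"status": "ok", "latency_ms": 42, "version": "1.4.2"}

def test_service_healthy():
    resp = fetch_status()
    assert resp["status"] == "ok"
    assert resp["latency_ms"] < 500  # fail fast on performance regressions

test_service_healthy()
print("smoke test passed")
```

Even a check this small delivers the "no nasty surprises" property the quote describes: a broken deployment fails loudly and immediately rather than surfacing as a customer-facing outage.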


What's next for AI: Gary Marcus talks about the journey toward robust artificial intelligence

Marcus points out this is a really deep deficiency, and one that goes back to 1965. ELIZA, the famous early chatbot, just matched keywords and talked to people about therapy. So there's not much progress, Marcus argues, certainly not exponential progress as people like Ray Kurzweil claim, except in narrow fields like playing chess. We still don't know how to make a general-purpose system that could understand conversations, for example. The counter-argument to that is that we just need more data and bigger models (hence more compute, too). Marcus begs to differ, and points out that AI models have been growing, consuming more and more data and compute, yet the underlying issues remain. Recently, Geoff Hinton, one of the forefathers of deep learning, claimed that deep learning is going to be able to do everything. Marcus thinks the only way to make progress is to put together building blocks that are there already, but that no current AI system combines. ... A connection to the world of classical AI. Marcus is not suggesting getting rid of deep learning, but using it in conjunction with some of the tools of classical AI. Classical AI is good at representing abstract knowledge, representing sentences or abstractions. The goal is to have hybrid systems that can use perceptual information.
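The hybrid idea can be illustrated with a deliberately tiny toy: a statistical score standing in for a learned model, combined with an explicit, human-readable symbolic rule that can override it. Every name here is illustrative only, not from any real system Marcus describes:

```python
# Toy neuro-symbolic hybrid: a statistical component (a crude keyword
# score standing in for a learned model) plus a symbolic knowledge base
# whose explicit facts override the statistical guess.

def learned_score(text):
    # Pretend "model": fraction of words that look suspicious.
    bad = {"attack", "steal"}
    words = text.lower().split()
    return sum(w in bad for w in words) / max(len(words), 1)

KNOWLEDGE = {"penetration test": "benign"}  # symbolic, inspectable fact

def classify(text):
    # Symbolic layer first: explicit knowledge beats the statistical guess.
    for phrase, label in KNOWLEDGE.items():
        if phrase in text.lower():
            return label
    return "flagged" if learned_score(text) > 0.2 else "benign"

print(classify("they attack and steal data"))     # flagged
print(classify("schedule the penetration test"))  # benign
```

The point of the toy is the architecture, not the classifier: the symbolic layer carries abstract knowledge a purely statistical model would have to relearn from data, which is exactly the division of labor Marcus argues for.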


Passage of California privacy act could spur similar new regulations in other states

The COVID-19 crisis has derailed a lot of legislative activity across the country, making it difficult to get a solid sense of where privacy initiatives are headed. “The challenge you're going to find is that post-pandemic most of the state legislatures said anything that's not COVID related is not being considered,” Stockburger says. After the pandemic recedes from its urgent priority status, many states could kick new legislative efforts into gear. “Next year, that's when you're going to see big new developments and introductions,” he says. ... Another question that remains is whether the federal government will step in to create a more consistent privacy law framework. In the past, Silicon Valley giants stood staunchly opposed to the stringent provisions of the CCPA and sought a national privacy law to preempt and water down the CCPA’s requirements. However, their resistance has weakened over the past several years. “At the federal level, there's just a real challenge in getting any type of omnibus legislative efforts pushed through,” Stockburger says. “That’s been a challenge since probably 2016 when the Democrats got whooped in the midterms, and since then, we've had divided Congress.”


5 Things We’ve Learned from Digital Transformation in the Last 5 Years

While mobile offerings may have been a luxury five years ago, they are now an indispensable channel. Many organizations previously viewed mobile services as a nice-to-have, or as an offering geared towards a younger generation of tech-savvy consumers. However, now that contactless operations are the norm, offerings that incorporate mobile capture and mobile onboarding are a must-have for meeting the needs of the new digital-first consumer. From check deposits to application submissions, mobile services can go a long way in providing convenience, accessibility and ease. Organizations that embrace mobile capabilities and seamlessly connect them with back-end systems are well-positioned to enhance the customer experience and improve customer retention.

Five years ago, it wasn't uncommon for an organization's process discovery methods to be defined by one-on-one interviews, firsthand observations and manual analysis. It was typical for business leaders to map out processes via Post-it notes — what used to be referred to as "walking the wall." Now, however, organizations are turning to machine learning and predictive analytics to discover and analyze their processes in a more accurate way.
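The automated process discovery mentioned above typically starts from event logs: records of which activity happened in which case, in what order. A minimal sketch of the first step most process-mining tools perform, building a directly-follows graph; the event log here is hypothetical:

```python
# Minimal process-discovery sketch: derive a directly-follows graph
# (which activity follows which, and how often) from an event log.
from collections import Counter

# Hypothetical event log: (case_id, activity), already ordered by time.
log = [
    ("c1", "receive"), ("c1", "review"), ("c1", "approve"),
    ("c2", "receive"), ("c2", "review"), ("c2", "reject"),
    ("c3", "receive"), ("c3", "approve"),
]

def directly_follows(events):
    edges = Counter()
    last_seen = {}  # case_id -> most recent activity
    for case, act in events:
        if case in last_seen:
            edges[(last_seen[case], act)] += 1
        last_seen[case] = act
    return edges

for (a, b), n in sorted(directly_follows(log).items()):
    print(f"{a} -> {b}: {n}")
```

Run over millions of real events instead of eight, this is the kind of analysis that replaces "walking the wall": the graph reveals the paths work actually takes, including shortcuts and exceptions (such as case c3 skipping review) that interview-based mapping tends to miss.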


DDoS Protection for Workloads on AWS with GWLB & DefensePro VA

There are many ways to deploy DefensePro VA with AWS Gateway Load Balancer to achieve north-south and/or east-west inspection. AWS Gateway Load Balancer adheres to multiple deployment use cases and network architectures. The AWS Gateway Load Balancer provides the VPC Endpoint Service, which allows customers to mimic on-prem networking paradigms, such as hub-and-spoke, across different VPCs and accounts. Customers can create a VPC dedicated to DDoS inspection where a group of DefensePro appliances is deployed with a Gateway Load Balancer. By utilizing AWS Ingress Routing, customers have full control of traffic routing to and from the DDoS inspection VPC. The following network topology illustrates a simplified deployment of DefensePro VA in a dedicated DDoS inspection VPC. There are two VPCs: the Customer VPC, which is Internet-facing, and the DDoS Inspection VPC. The Customer VPC has two Availability Zones for high availability of application instances. Each zone includes a GWLB endpoint (a VPC endpoint) that steers traffic to/from the Gateway Load Balancer located in the DDoS Inspection VPC. A group of DefensePro VAs is deployed in the DDoS Inspection VPC, spanning two Availability Zones, for high availability.
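The ingress-routing pattern described above can be reduced to a simple next-hop chain: the edge route table sends inbound traffic to the GWLB endpoint, the Gateway Load Balancer hands it to the inspection appliance, and clean traffic continues on to the application subnet. A toy model of that hop sequence; all names are illustrative, not real AWS resource identifiers:

```python
# Toy model of GWLB ingress routing: inbound traffic from the internet
# gateway is steered through DDoS inspection before reaching the app.

ROUTES = {
    "igw":           "gwlb-endpoint",   # AWS Ingress Routing: edge route table
    "gwlb-endpoint": "defensepro-va",   # GWLB hands packets to the appliance
    "defensepro-va": "app-subnet",      # clean traffic returns toward the app
}

def path(src, dst):
    hops, cur = [src], src
    while cur != dst:
        cur = ROUTES[cur]
        hops.append(cur)
    return hops

print(" -> ".join(path("igw", "app-subnet")))
```

The key design point is that the application subnet never sees uninspected traffic: the route table, not the application, enforces that every inbound packet traverses the inspection VPC first.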


Does Your Business Need a Digital Transformation?

Because a digital transformation inevitably involves new systems, processes, and skills, it can be daunting for many leaders and teams. Embracing new technology involves a willingness to disrupt current processes and to develop new ones. This can be uncomfortable and challenging, and it’s important for leaders to acknowledge that from the outset. For many businesses, a digital transformation means completely rethinking systems and processes in order to embed technology throughout them. From the start, leadership teams need to be willing to make these major changes in order to take advantage of new tools. ... Perhaps the most important thing you can do is to prepare your team. Whenever there are major changes, leaders should expect some pushback. It’s important to anticipate and proactively address this issue to ensure that your team is ready and supportive of upcoming changes. A simple way to prepare your team is by being transparent about the planning process, goals, and anticipated shifts. Involving them in the process as much as possible will lead to increased buy-in and engagement from all levels of your team.


Stop thinking of cybersecurity as a problem: Think of it as a game

Companies can’t afford large-scale cyberattacks at any time, but especially right now. The pandemic has caused consumers who may have lost significant income to be picky with their purchases and investments. Companies need to be focused on retaining customer relationships so that they’ll weather the pandemic, and a take-down of the network could undercut customer trust in unrecoverable ways. But many companies won’t take action. They may view their older systems as good enough to ride the wave to the other side of the pandemic, planning to go back to what they had used before once there, unprepared for the next attack. They may get through, but the landscape will have changed — things will not go back to how they were, and they will no longer be able to rely on systems that protected a pre-COVID world. Now, there’s an opportunity to huddle up, form a new strategy, and go on the offensive. The pandemic can be an opportunity for businesses to take a look at their vulnerabilities, map their attack surface, and take appropriate actions to secure and strengthen their systems.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg