Daily Tech Digest - November 17, 2020

SD-WAN needs a dose of AIOps to deliver automation

In some ways, SD-WAN exacerbates the troubleshooting problem. It adds a level of resiliency to the network via multi-path networking that can hide outages. This leads to a situation where the network operations dashboard can show everything is "green," but apps are performing poorly. Network performance issues have become glaringly obvious with the rise of video, and they are causing network engineers to constantly scramble to try to remediate issues. Here is where AI can make a difference. AI systems can ingest the massive amounts of data provided by network infrastructure (LAN, WLAN and WAN) to "see" things that even the savviest network engineer can't. At one time, when networks were fairly simple and traffic volumes were lower, it was possible for a seasoned network professional to "know" a network and quickly find the root of problems through a combination of domain knowledge and rapid inspection of traffic. Not so today, as the number of devices and applications and the volume of information have skyrocketed. One of the big changes is that periodic polling data has been replaced by real-time streaming telemetry that increases data volumes by an order of magnitude or more.
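The shift from periodic polling to streaming telemetry changes the analysis problem from inspecting snapshots to detecting anomalies in a continuous feed. A minimal sketch of that idea, using a rolling z-score over a latency stream (the window size, threshold, and sample values here are illustrative assumptions, not figures from the article):

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=30, threshold=3.0):
    """Flag samples that deviate sharply from the recent rolling baseline."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(stream):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            # A sample far outside the recent distribution is flagged,
            # even while the dashboard for the link still shows "green".
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Steady latency with one spike: only the spike (index 7) is flagged.
latencies = [20.0, 21.0, 19.5, 20.5, 20.0, 19.8, 20.2, 250.0, 20.1, 20.3]
print(detect_anomalies(latencies))  # → [7]
```

A real AIOps system correlates many such streams across LAN, WLAN and WAN, but the core advantage over polling is the same: the baseline is continuous, so brief degradations are not averaged away between poll intervals.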


Ripe for digital disruption: Which industries are most at risk and why

The changing demographics favor workers who are much more open to gig work and who place greater trust in digital platforms to create marketplaces. This has opened the door to changes in typically cohesive industries, such as higher education. The increased demand for digital skills has led many students to decouple academic interest from professional credentialing. This will lead to an exodus from costlier schools in favor of boutique schools that cater to narrower interests. Students will earn digital credentials from specific, technology-heavy institutions like Lambda School early in their career, and pursue further growth and learning throughout their career from organizations such as Coursera or LinkedIn Learning. Generation Z has grown up with democratized value creation, like YouTube channels or Twitch streamers that organically found their base and built their audience using digital techniques. These new, digital entities can identify the most valuable part of a business process and align themselves to it, while outsourcing the other aspects with great velocity. Tesla, for example, has done away with its PR department and is relying on its outspoken CEO to directly message the market.


The seven elements of successful DDoS defence

Because multiple computers from a globally dispersed botnet “zombie army” of hijacked internet-connected devices are attempting to flood a server with fake traffic to knock it offline, DDoS attacks are inherently more destructive than Denial of Service (DoS) attacks perpetrated from one machine. However, in recent years we’ve monitored a disturbing trend: DDoS used as a smokescreen. The service disruption draws the IT team’s attention away from a separate and more sophisticated incursion, such as account takeover or phishing. The damage from the DDoS alone can be bad enough. It takes minutes for a targeted website to go down in a strike, but hours to recover. In fact, 91% of organisations have experienced downtime from a DDoS attack, with each hour of downtime costing an average of $300,000. Beyond the revenue loss, DDoS can erode customer trust, force businesses to spend large amounts in compensation, and cause long-term reputational damage, particularly if it leads to other breaches. ... A comprehensive defence is essential, but with attacks ranging from massive volumetric bombardments to sophisticated and persistent application layer threats, what are the most important elements of potential solutions to consider?


Breakdown of a Break-in: A Manufacturer's Ransomware Response

At the 2020 (ISC)² Security Congress, SCADAfence CEO Elad Ben-Meir took the virtual stage to share details of a targeted industrial ransomware attack against a large European manufacturer earlier this year. His discussion of how the attacker broke in, the collection of forensic evidence, and the incident response process offered valuable lessons to an audience of security practitioners. The firm learned of this attack late at night when several critical services stopped functioning or froze altogether. Its local IT team found ransom notes on multiple network devices and initially wanted to pay the attackers; however, after the adversaries raised their price, the company contacted SCADAfence's incident response team. ... Before it arrived on-site, the incident response team instructed the manufacturer to contain the threat to a specific area of the network and prevent the spread of infection, minimize or eliminate downtime of unaffected systems, and keep the evidence in an uncontaminated state. "The initial idea was to try to understand where this was coming from, what machines were infected and what machines those machines were connected to, and if there was the ability to propagate additionally from there," said Ben-Meir in his talk.


Sustainability: The growing issue of supply chain disruption

There is likely to be more disruption ahead as extreme weather events appear to be on the rise. According to McKinsey, climate disruptions to supply chains are going to become increasingly frequent and more severe. Kern said: “It’s a mathematical effect that the number of natural catastrophes has been increasing massively in recent years. If you look at Hurricanes Katrina, Harvey, Irma and Maria as well as the Japanese earthquake and the Thai floods you can see that we are getting loss events far above the previous average of around $50bn. We’re seeing nat cats causing losses up to $150bn of insured value, so as you can imagine this is a very big concern for us.” Baumann pointed out that as well as more extreme weather, other future trends could play a role. He said: “There are several drivers of disruption. The complexity of supply chains is increasing, and more complexity means more potential points of failure. Even simple goods can have as many as ten suppliers. That in turn adds to the risk that transportation and production may be disrupted.” At the same time, practices such as just-in-time delivery or lean manufacturing can also introduce risks, particularly when organisations are focused purely on reducing costs.


Figuring out programming for the cloud

The trick, says Rosoff, is to give the programmer enough of a language to express the authorization rule, but not so much freedom that they can break the entire application if they have a bug. How does one determine which language to use? Rosoff offers three decision criteria: Does the language allow me to express the complete breadth of programs I need to write? (In the case of authorization, does it let me express all of my authZ rules?); Is the language concise? (Is it fewer lines of code and easier to read and understand than the YAML equivalent?); Is the language safe? (Does it stop the programmer from introducing defects, even intentionally?). We still have a ways to go to make declarative languages the easy and obvious answer to infrastructure-as-code programming. One reason developers turn to imperative languages is that they have huge ecosystems built up around them with documentation, tooling, and more. Thus it’s easier to start with imperative languages, even if they’re not ideal for expressing authorization configurations in IaC. We also still have work to do to make the declarative languages themselves approachable for newbies. This is one reason Polar, for example, tries to borrow imperative syntax.
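Rosoff's safety criterion can be made concrete with a toy example. The sketch below is a hypothetical rule format in Python, not Polar's actual syntax: rules are expressed as data rather than arbitrary code, so the evaluator can only compare attributes. A buggy rule can allow or deny incorrectly, but it cannot crash or corrupt the rest of the application:

```python
# Rules are declarative data: which role may perform which action on what.
# "*" is a wildcard. The rule set, roles and resources here are invented.
RULES = [
    {"role": "admin",  "action": "*",     "resource": "*"},
    {"role": "editor", "action": "write", "resource": "document"},
    {"role": "viewer", "action": "read",  "resource": "document"},
]

def is_allowed(user_role, action, resource):
    """Evaluate the rule set; no user-supplied code is ever executed."""
    for rule in RULES:
        if rule["role"] != user_role:
            continue
        if rule["action"] in ("*", action) and rule["resource"] in ("*", resource):
            return True
    return False

print(is_allowed("editor", "write", "document"))  # → True
print(is_allowed("viewer", "write", "document"))  # → False
```

Measured against the three criteria: the format is concise, and it is safe by construction, but its breadth is limited, which is exactly the tension Rosoff describes between a restricted declarative language and a general-purpose imperative one.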


A Cloud-Native Architecture for a Digital Enterprise

Cloud-native applications are all about dynamism, and microservice architecture (MSA) is critical to accomplishing this goal. MSA helps to divide and conquer by deploying smaller services focused on well-defined scopes. These smaller services need to integrate with different software-as-a-service (SaaS) endpoints, legacy applications, and other microservices to deliver business functionality. While microservices expose their capabilities as simple APIs, ideally, consumers should access these as integrated, composite APIs that align with business requirements. A combination of an API-led integration platform and cloud-native technologies helps to provide the secured, managed, observed, and monetized APIs that are critical for a digital enterprise. The infrastructure and orchestration layers represent the same functionality that we discussed in the cloud-native reference architecture. Cloud Foundry, Mesos, Nomad, and Kubernetes are examples of industry-leading container orchestration platforms, while Istio and Linkerd are service meshes that typically run on top of them. Knative, AWS Lambda, Azure Functions, Google Cloud Functions, and Oracle Functions are a few examples of function-as-a-service (FaaS) platforms.


New streaming and digital media rules by Indian government rattle industry

So, what exactly does this rule portend? It's not entirely clear. To some who earn their bread and butter monitoring these industries, the prognosis is dire. Nikhil Pahwa, a digital rights activist and founder of MediaNama, a prominent website that covers these industries, said this to the Guardian: "The fear is that with the Ministry of Information and Broadcasting -- essentially India's Ministry of Truth -- now in a position to regulate online news and entertainment, we will see a greater exercise of government control and censorship." If this becomes reality, it would wreck the plans of companies such as Netflix and Amazon that have seen their fortunes rise dramatically in the last few years with the spectacular boom of smartphones and cheap data, both goldmines that keep on giving. The COVID era has only added more fuel to this trend. Eager to capitalise on this nascent market, Netflix has already pumped $400 million into the country and amassed 2.5 million precious subscribers. Consulting outfit PwC predicts that India's media and entertainment industry will grow at a brisk 10.1% clip annually to reach $2.9 billion by 2024.


Executive Perspective: Privacy Ops Meets DataOps

PrivacyOps is emerging because privacy considerations can no longer be an afterthought in an organization’s software development lifecycle -- they need to be tightly integrated. There is pressure on organizations to prove they are taking responsibility for personal data and acting in compliance with regulations, and it’s only going to increase. The real opportunity that the emergence of PrivacyOps presents is bringing security and privacy processes together, and standardizing best practices that need to be implemented across organizations. It’s far too easy for engineering, analytics, and compliance teams to talk over each other. Bringing these domains together through software will help to set expectations across the industry about protecting the privacy of data assets. Techniques such as k-anonymization, for example, are practiced by some of the best teams in healthcare, but they are hardly commonplace, despite being relatively easy to implement. To deliver compliant analytics, you need data engineers who can reliably ship the data from place to place, while implementing the appropriate transformations. However, what actually needs to be done is often not very clear to the engineering team. Data scientists want as much data as possible; compliance teams are pushing to minimize the data footprint. Regulations are in flux and imprecise.
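k-anonymization, mentioned above, requires that every combination of quasi-identifiers (attributes like age band and ZIP prefix that could re-identify someone) appears in at least k records, so no individual can be singled out. A minimal sketch of the verification step (the column names and records are illustrative, not from any real dataset):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

rows = [
    {"age_band": "30-39", "zip3": "021", "diagnosis": "A"},
    {"age_band": "30-39", "zip3": "021", "diagnosis": "B"},
    {"age_band": "40-49", "zip3": "946", "diagnosis": "C"},
]
# The (40-49, 946) group has only one record, so 2-anonymity fails.
print(is_k_anonymous(rows, ["age_band", "zip3"], 2))  # → False
```

In practice the harder part is the generalization step that coarsens values (e.g. exact age into age bands) until a check like this passes with acceptable information loss.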


2021 predictions for the Everywhere Enterprise

While people will eventually return to the office, they won’t do so full-time, and they won’t return in droves. This shift will close the circle on a long trend that has been building since the mid-2000s: the dissolution of the network perimeter. The network and the devices that defined its perimeter will become even less special from a cybersecurity standpoint. ... Happy, productive workers are even more important during a pandemic, especially as employees are, on average, working three hours longer each day since the pandemic started, disrupting their work-life balance. It’s up to employers to focus on the user experience and make workers’ lives as easy as possible. When the COVID-19 lockdown began, companies coped by expanding their remote VPN usage. That got them through the immediate crisis, but it was far from ideal. On-premises VPN appliances suffered a capacity crunch as they struggled to scale, creating performance issues, and users found themselves dealing with cumbersome VPN clients and log-ins. It worked for a few months, but as employees settle in to continue working from home in 2021, IT departments must concentrate on building a better remote user experience.



Quote for the day:

"At first dreams seem impossible, then improbable, then inevitable." -- Christopher Reeve

Daily Tech Digest - November 16, 2020

System brings deep learning to “internet of things” devices

To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight — instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. “It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine. The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile-time. “We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.” In the group’s tests of TinyEngine, the size of the compiled binary code was between 1.9 and five times smaller than comparable microcontroller inference engines from Google and ARM. TinyEngine also contains innovations that reduce runtime, including in-place depth-wise convolution, which cuts peak memory usage nearly in half. After codesigning TinyNAS and TinyEngine, Han’s team put MCUNet to the test.
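TinyEngine's "keep only what we need" principle follows from the network being fixed at compile time: the generator can emit only the operator kernels that architecture actually uses. A toy sketch of that specialization step (the operator names and stub generator are illustrative, not TinyEngine's actual implementation):

```python
# Full operator library a generic inference engine would ship on-device.
KERNELS = {
    "conv2d":          "void conv2d(...);",
    "depthwise_conv":  "void depthwise_conv(...);",
    "avg_pool":        "void avg_pool(...);",
    "fully_connected": "void fully_connected(...);",
    "softmax":         "void softmax(...);",
}

def generate_engine(network_layers):
    """Emit stubs only for operators the compiled network actually uses."""
    needed = sorted(set(network_layers), key=list(KERNELS).index)
    return [KERNELS[op] for op in needed]

# A tiny network using three of the five operators: the dead weight
# (fully_connected, softmax) never makes it into the binary.
layers = ["conv2d", "depthwise_conv", "conv2d", "avg_pool"]
print(generate_engine(layers))
```

A real code generator emits fully specialized loops (fixed shapes, strides, buffer addresses) rather than stubs, which is also what enables memory optimizations like the in-place depth-wise convolution mentioned above.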


Beyond the Database, and Beyond the Stream Processor: What's the Next Step for Data Management?

The breadth of database systems available today is staggering. Something like Cassandra lets us store a huge amount of data for the amount of memory the database is allocated; Elasticsearch is different, providing a rich, interactive query model; Neo4j lets us query the relationship between entities, not just the entities themselves; things like Oracle or PostgreSQL are workhorse databases that can morph to different types of use case. Each of these platforms has slightly different capabilities that make it more appropriate to a certain use case but at a high level, they’re all similar. In all cases, we ask a question and wait for an answer. This hints at an important assumption all databases make: data is passive. It sits there in the database, waiting for us to do something. This makes a lot of sense: the database, as a piece of software, is a tool designed to help us humans — whether it's you or me, a credit officer, or whoever — interact with data.  But if there's no user interface waiting, if there's no one clicking buttons and expecting things to happen, does it have to be synchronous? In a world where software is increasingly talking to other software, the answer is: probably not.
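The contrast between passive data (the caller asks and waits) and active data (changes are pushed to interested software) can be sketched as a pull-style query versus a push-style subscription. This is a toy in-memory example, not any particular product's API:

```python
class ActiveTable:
    """A table that both answers queries and pushes changes to subscribers."""
    def __init__(self):
        self.rows, self.subscribers = [], []

    def query(self, predicate):
        # Pull model: a human (or their UI) asks a question and waits.
        return [r for r in self.rows if predicate(r)]

    def subscribe(self, callback):
        # Push model: software registers interest, no one waits on a screen.
        self.subscribers.append(callback)

    def insert(self, row):
        self.rows.append(row)
        for notify in self.subscribers:
            notify(row)

table = ActiveTable()
seen = []
table.subscribe(seen.append)                       # downstream software
table.insert({"account": "a1", "balance": 90})
print(table.query(lambda r: r["balance"] < 100))   # pull: asked explicitly
print(seen)                                        # push: delivered on insert
```

Both calls yield the same row, but only the second happens without anyone asking, which is the behaviour stream processors are built around.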


Data warehousing workloads at data lake economics with lakehouse architecture

Data lakes in the cloud have high durability, low cost, and unbounded scale, and they provide good support for the data science and machine learning use cases that many enterprises prioritize today. But, all the traditional analytics use cases still exist. Therefore, customers generally have, and pay for, two copies of their data, and they spend a lot of time engineering processes to keep them in sync. This has a knock-on effect of slowing down decision making, because analysts and line-of-business teams only have access to data that’s been sent to the data warehouse rather than the freshest, most complete data in the data lake. ... The complexity from intertwined data lakes and data warehouses is not desirable, and our customers have told us that they want to be able to consolidate and simplify their data architecture. Advanced analytics and machine learning on unstructured and large-scale data are among the most strategic priorities for enterprises today – and the growth of unstructured data is going to increase exponentially – so it makes sense for customers to think about positioning their data lake as the center of their data infrastructure. However, for this to be achievable, the data lake needs a way to adopt the strengths of data warehouses.


What to Learn to Become a Data Scientist in 2021

Apache Airflow, an open source workflow management tool, is rapidly being adopted by many businesses for the management of ETL processes and machine learning pipelines. Many large tech companies such as Google and Slack are using it, and Google even built its Cloud Composer tool on top of this project. I am noticing Airflow being mentioned more and more often as a desirable skill for data scientists in job adverts. As mentioned at the beginning of this article, I believe it will become more important for data scientists to be able to build and manage their own data pipelines for analytics and machine learning. The growing popularity of Airflow is likely to continue at least in the short term, and, as an open source tool, it is definitely something that every budding data scientist should at least learn. ... Data science code is traditionally messy, not always well tested and lacking in adherence to styling conventions. This is fine for initial data exploration and quick analysis, but when it comes to putting machine learning models into production, a data scientist will need a good understanding of software engineering principles. If you are planning to work as a data scientist, it is likely that you will either be putting models into production yourself or at least be heavily involved in the process.
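Airflow models a pipeline as a directed acyclic graph (DAG) of tasks and runs each task only after its upstream dependencies finish. The scheduling idea can be sketched without Airflow itself, using the standard library's topological sorter (the task names below are illustrative):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of upstream tasks it depends on,
# mirroring how an Airflow DAG wires tasks together.
pipeline = {
    "extract": set(),
    "clean": {"extract"},
    "train_model": {"clean"},
    "build_report": {"clean"},
    "publish": {"train_model", "build_report"},
}

order = list(TopologicalSorter(pipeline).static_order())
print(order)  # every task appears after all of its dependencies
```

Airflow adds scheduling, retries, backfills and monitoring on top, but the dependency-resolution core is exactly this: a valid execution order over a DAG, with independent tasks (here, `train_model` and `build_report`) free to run in parallel.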


WhatsApp Pay: Game changer with new risks

The payment instruction itself is a message to the partner bank, which then triggers a normal UPI transaction from the customer’s designated UPI bank to the destination partner bank through the National Payments Corporation of India (NPCI). The destination partner bank forwards the payment to the addressee’s default UPI bank registered with WhatsApp. A confirmation of credit is also sent through WhatsApp and reaches the message box of the recipient. It is possible that at either end, the WhatsApp partner bank may not be the customer’s bank. Hence, there may be the involvement of four banks, the NPCI and WhatsApp in completing the transaction. As far as the user is concerned, the system is managed by WhatsApp and none of the other players is visible. Though WhatsApp is not licensed to undertake UPI transactions directly, it engages the services of its partner banks to initiate the transaction. As these partner banks are not bankers for the customers, they engage two more banks to assist them. Finally, NPCI acts as the agent of the two banks through which the money actually passes to the right bank. Thus, there is a chain of principal-agent transactions, and the roles of the customer, WhatsApp, banks, etc., need to be clarified.


New Circuit Compression Technique Could Deliver Real-World Quantum Computers Years Ahead of Schedule

“By compressing quantum circuits, we could reduce the size of the quantum computer and its runtime, which in turn lessens the requirement for error protection,” said Michael Hanks, a researcher at NII and one of the authors of a paper, published on November 11, 2020, in Physical Review X. Large-scale quantum computer architectures depend on an error correction code to function properly, the most commonly used of which is the surface code and its variants. The researchers focused on the circuit compression of one of these variants: the 3D-topological code. This code behaves particularly well for distributed quantum computer approaches and has wide applicability to different varieties of hardware. In the 3D-topological code, quantum circuits look like interlacing tubes or pipes, and are commonly called “braided circuits.” The 3D diagrams of braided circuits can be manipulated to compress and thus reduce the volume they occupy. Until now, the challenge has been that such “pipe manipulation” is performed in an ad-hoc fashion. Moreover, there have only been partial rules for how to do this. “Previous compression approaches cannot guarantee whether the resulting quantum circuit is correct,” said co-author Marta Estarellas, a researcher at NII.


Microsoft Warns: A Strong Password Doesn’t Work, Neither Does Typical MFA 

“Remember that all your attacker cares about is stealing passwords...That’s a key difference between hypothetical and practical security.” — Microsoft’s Alex Weinert In other words, the bad guys will do whatever is necessary to steal your password, and a strong password isn’t an obstacle when criminals have a lot of time and a lot of tools at their disposal. ... MFA based on phones, aka the public switched telephone network (PSTN), is not secure, according to Weinert. (What is typical MFA? It’s when, for example, a bank sends you a verification code via a text message.) “I believe they’re the least secure of the MFA methods available today,” Weinert wrote in a blog. “When SMS (texting) and voice protocols were developed, they were designed without encryption...What this means is that signals can be intercepted by anyone who can get access to the switching network or within the radio range of a device,” Weinert wrote. Solution: use app-based authentication. For example, Microsoft Authenticator or Google Authenticator. An app is safer because it doesn’t rely on your carrier. The codes are generated in the app itself and expire quickly.
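Authenticator apps generate codes locally from a shared secret using TOTP (RFC 6238), so no code ever crosses the carrier's network where it could be intercepted. A minimal sketch of the algorithm using only the standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238) over HMAC-SHA1 (RFC 4226)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at T=59s the counter is 1 and the 6-digit code is:
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

Because the code depends only on the shared secret and the clock, it expires with the 30-second time step and is useless to anyone tapping the SMS or voice channel.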


Defining data protection standards could be a hot topic in state legislation in 2021

Once the immediacy of the pandemic dissipates and the political heat cools, cybersecurity issues will likely surface again in new or revived legislation in many states, even if woven throughout other related matters. It’s difficult to separate cybersecurity per se from adjoining issues such as data privacy, which has generally been the biggest topic to involve cybersecurity issues at the state level over the past four years. “You really don’t have this plethora of state cybersecurity laws that would be independent of their privacy law brethren,” Tantleff said. According to the National Conference of State Legislatures, at least 38 states, along with Washington, DC, and Puerto Rico introduced or considered more than 280 bills or resolutions that deal significantly with cybersecurity as of September 2020. Setting aside privacy and some grid security funding issues, there are two categories of cybersecurity legislative issues at the state level to watch during 2021. The first and most important is spelling out more clearly what organizations need to meet security and privacy regulations. The second is whether states will pick up election security legislation left over from the 2020 sessions.


The Case for Combining Next Generation Tech with Human Oversight

Human error is the main cause of security breaches, wrong data interpretation, mistaken insights, and a variety of other damning experiences the insights industry has had to wade through ever since its conception. Zooming out to take a wider look, human error is the cause of mistaken elections, aviation accidents, cybersecurity issues, etc., but also of scientific breakthroughs across the world. While some mistakes yield true results, most have dangerous consequences that could have been avoided if we were more careful. To err is human, but in an industry where mistakes have real-world consequences, to err is to potentially cost a business its life. If we stick with the artificial intelligence and automation example, automated processes with next generation technology are the most poignant example of humans trying to make up for their mistakes and can help minimise human error at all stages ... The main benefit of combining human oversight with this next generation technology is that we can catch and fix any bugs that arise before they harm the research process and projects that rely on said technology. But we need to be wary that humans cannot catch every mistake, and when one slips through, that is when oversight takes on a whole new, disappointing meaning.


Important Considerations for Pushing AI to the Edge

The decision on where to train and deploy AI models can be determined by balancing considerations across six vectors: scalability, latency, autonomy, bandwidth, security, and privacy. In terms of scalability, in a perfect world, we’d just run all AI workloads in the cloud where compute is centralized and readily scalable. However, the benefits of centralization must be balanced out with the remaining factors that tend to drive decentralization. For example, if you depend on edge AI for latency-critical use cases and for which autonomy is a must, you would never make a decision to deploy a vehicle’s airbag from the cloud when milliseconds matter, regardless of how fast and reliable your broadband network may be under normal circumstances. As a general rule, latency-critical applications will leverage edge AI close to the process, running at the Smart and Constrained Device Edges as defined in the paper. Meanwhile, latency-sensitive applications will often take advantage of higher tiers at the Service Provider Edge and in the cloud because of the scale factor. In terms of bandwidth consumption, the deployment location of AI solutions spanning the User and Service Provider Edges will be based on a balance of the cost of bandwidth, the capabilities of devices involved and the benefits of centralization for scalability.
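One way to operationalize the six vectors is as a simple weighted-scoring exercise over candidate deployment tiers. The weights and tier scores below are purely illustrative assumptions to show the shape of the trade-off, not values from the article:

```python
VECTORS = ["scalability", "latency", "autonomy", "bandwidth", "security", "privacy"]

# How well each deployment tier serves each vector (0-10, invented numbers).
TIERS = {
    "cloud":       {"scalability": 9, "latency": 3, "autonomy": 2,
                    "bandwidth": 3, "security": 7, "privacy": 5},
    "device_edge": {"scalability": 3, "latency": 9, "autonomy": 9,
                    "bandwidth": 9, "security": 6, "privacy": 8},
}

def best_tier(weights):
    """Pick the tier with the highest weighted score for this use case."""
    def score(tier):
        return sum(weights.get(v, 0) * TIERS[tier][v] for v in VECTORS)
    return max(TIERS, key=score)

# An airbag controller: latency and autonomy dominate, so the edge wins.
print(best_tier({"latency": 5, "autonomy": 5, "scalability": 1}))  # → device_edge
```

A latency-tolerant training workload would weight scalability heavily instead and land in the cloud, which matches the general rule stated above.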



Quote for the day:

"If you want to do a few small things right, do them yourself. If you want to do great things and make a big impact, learn to delegate." -- John C. Maxwell

Daily Tech Digest - November 14, 2020

Data Scientist vs Business Analyst. Here’s the Difference.

Perhaps the biggest similarity between a Business Analyst and a Data Scientist lies in the words used to describe the roles. A Data Scientist is expected to perform business analytics in their role, as it is essentially what dictates their Data Science goals. A Business Analyst can expect to focus not on Machine Learning algorithms to solve business problems, but instead on surfacing anomalies, shifts and trends, and key points of interest for a business. ... Of course, there are some key differences between these two roles. One of the biggest differences is the use of Machine Learning by Data Scientists only. Another difference is that a Business Analyst can expect to communicate more with stakeholders than a Data Scientist would (sometimes Data Scientist work can be more heads down and not involve as many meetings). Here is a summary of the differences you can expect to find between these positions. ... These two roles share goals with one another. Each requires a deep dive into data with similar tools as well. The process of communication is similar, too — working with stakeholders from the company to go over the business problem, solution, results, and impact. Here is a summary of the key similarities between a Data Scientist and a Business Analyst.


CISA Director Expects to Be Fired Following Secure Election

US officials delivered a statement emphasizing the security of this year's election as news of these firings began to unfold. Members of the Election Infrastructure Government Coordinating Council (GCC) Executive Committee and the Election Infrastructure Sector Coordinating Council (SCC) say this election "was the most secure in American history." Across the country, they add, officials are reviewing the election process, and states with close calls will recount ballots. "This is an added benefit for security and resilience," they wrote. "This process allows for the identification and correction of any mistakes or errors. There is no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised." Security measures included pre-election testing, state certification of voting equipment, and the US Election Assistance Commission's (EAC) certification of voting equipment contribute to confidence in voting systems used in 2020, they said. Officials acknowledged the "many unfounded claims and opportunities for misinformation" about the election process and emphasize they have the "utmost confidence" in the election's security and integrity.


Security Awareness: Preventing Another Dark Web Horror Story

Our research from last year has already revealed that 1 in 4 people would be willing to pay to get their private information taken down from the dark web – and this number jumps to 50% for those who have experienced a hack. While only 13% have been able to confirm whether a company with which they’ve interacted has been involved in a breach, the reality is it’s much more likely than you’d think – since 2013, over 9.7 billion data records have been lost or stolen, and this number is only rising. Most of us would have no way of knowing whether our information is up for sale online. However, solutions now exist which proactively check for email addresses, usernames and other exposed credentials against third-party databases, alerting users should any leaked information be found. ...  Detection is undoubtedly pivotal in keeping ahead of fraudsters, but the foundations begin with awareness. The majority of breaches take place as a result of simple mistakes which can be easily addressed – using your Facebook password at work or failing to change the default settings of connected devices. But at the same time, businesses must stress the importance of being cyber-aware and foster a culture of security awareness throughout the organisation.


14 Finance Specialists Share Their Biggest Fintech Predictions For 2021

There will be more “bank in a box” tech layers between fintechs and banks to allow partnerships to spin up on a faster timeline. I also see more back-end companies automating critical compliance functions such as Know Your Customer and regulatory change management. I also think we will see many more “normal” companies offering financial services, as well as increasing consolidation among fintech companies. – Jeanette Fast... A big trend that will be seen is a renewed need for financial literacy. COVID-19 forced everyone to think about both their long- and short-term financial outlooks. What we have seen in the auto refinancing sector is that people don’t even know you can refinance a vehicle. You’ll find consumers who want to sharpen their finances and companies that will be trying to reach and educate them. – Tom Holgate, ... The rise of insurance tech will revolutionize the health insurance industry, with innovations ranging from digital health records to fitness tracking. The rise of smart contracts gives insurance companies a way to update their infrastructure and cut long-term costs while providing consumers with superior service. – Joseph Safina


How to Keep Up With Big Tech's Hiring Spree

If you’re realizing you need more tech skills to handle the new digital demands of your industry, look first at your existing workforce. Instead of spending time and money on hiring, look for ways to upskill employees who are interested in a more technical career path and have demonstrated an aptitude for learning. For example, someone in an administrative role who has quickly adapted to remote work might be a good candidate for a scrum master or project management role. If you don’t have the ability to train employees in-house, consider a partnership. ... Hiring, in general, is starting to pick up again. When the pandemic finally subsides and companies begin hiring in full force, most will be looking for talent in the same places. Instead of sourcing recent college grads, look for graduates from coding boot camps and other alternative skilling programs, or target self-taught learners. This crisis has demonstrated that online learning isn’t just possible; it’s a critical part of today’s young people’s development. The talent acquisition team at IBM has made a point to target so-called “new collar” workers to bolster its 360,000-employee workforce. The company has developed a robust learning program for people both inside and outside of the company interested in learning new technical skills.


Digital Robber Barons and Digital Vertical Integration

These Robber Barons leveraged vertical integration to create “economic moats” that locked out and blocked potential competitors. The term “economic moat”, popularized by Warren Buffett, refers to a business' ability to maintain competitive advantages in order to protect its long-term profits and market share from competing firms while charging monopoly-like prices to its customers and imposing onerous terms on its suppliers. Just like a medieval castle, the moat serves to protect the riches of those inside the castle from outsiders. Andrew Carnegie is an example of a Robber Baron who used vertical integration to create economic moats for Carnegie Steel. Carnegie Steel (later U.S. Steel) became the dominant steel supplier in the U.S. through vertical integration of the steel value chain. Carnegie owned not only the steel mills that produced the different grades and types of steel, but also the iron ore mines that supplied the main ingredient in steel production, the coke/coal mines that fueled the blast furnaces from which steel was produced, and the railroads and shipping lines that transported the iron ore and coke to the steel mills and the finished steel products to customers.


Building a secure hybrid cloud

If all your computing assets are stored in a single location which then experiences an extended power outage, phone service or internet outage, natural disaster, or terrorist attack, your business essentially grinds to a halt. Many larger organizations invest in constructing and maintaining multiple data centers for just that reason. For most small businesses, this added cost is beyond their means. Cloud technology removes this challenge by placing the business continuity requirement entirely on the provider. Along the same lines as business continuity: because of its ubiquity, the cloud gives businesses a competitive advantage over companies that still rely on legacy on-premises hardware-based solutions. Case in point: I recently worked with a company that had the phone lines at one of its locations go down. It took three days for two different phone companies to figure out whose fault it was and then finally fix the problem. During those three days, a busy office was completely down with no phone service whatsoever. This kind of service level might have been acceptable in 1992; in the 2020s it is beyond unacceptable. A cloud communications provider with a guaranteed service-level agreement would have ensured that such a serious outage never happened.


Testing in Production 101

To start, deploy your first feature to production with the default rule off for safety. This ensures that only the targeted users will have access to the feature. Next, run your automation scripts in production with targeted test users, as well as the regression suite to guarantee previously released features are not affected by your changes. With the feature flag off and only your targeted team members having access to the feature, you will officially be testing in production. This is the time to resolve any bugs and validate all proper functionality. It’s important to remember that because end users do not yet have access to your feature, they will not be impacted if anything does go wrong. After you’ve resolved the issues that appeared in your first test and you’re confident the feature will work properly, it’s time to use a canary release to open up the feature to 1% of your user base. The next few days will be spent monitoring error logs and growing your confidence in the feature until you feel it’s appropriate to increase the percentage of users that can access your feature. Once you reach 100% of users and you know without a doubt that the feature works, it’s time to turn on the default rule for the feature.
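The staged rollout described above can be sketched as a small feature-flag check. The `FeatureFlag` class, its field names, and the hashing scheme below are illustrative only, not any particular vendor's flagging API:

```python
import hashlib

class FeatureFlag:
    """Minimal feature flag: default rule, targeted test users, canary percentage."""

    def __init__(self, name, default_on=False, targeted_users=None, rollout_pct=0):
        self.name = name
        self.default_on = default_on               # the "default rule"
        self.targeted_users = set(targeted_users or [])
        self.rollout_pct = rollout_pct             # canary percentage, 0-100

    def _bucket(self, user_id):
        # Stable hash so a given user always lands in the same 0-99 bucket.
        digest = hashlib.sha256(f"{self.name}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100

    def is_enabled(self, user_id):
        if self.default_on:
            return True                            # feature fully released
        if user_id in self.targeted_users:
            return True                            # internal test users only
        return self._bucket(user_id) < self.rollout_pct

# Stage 1: default rule off, only the test team sees the feature.
flag = FeatureFlag("new-checkout", targeted_users=["qa-1", "qa-2"])
assert flag.is_enabled("qa-1") and not flag.is_enabled("customer-42")

# Stage 2: canary release to 1% of the user base.
flag.rollout_pct = 1
```

The stable hash is the key design choice: a user who sees the feature during the 1% canary keeps seeing it as the percentage grows, rather than flickering in and out.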


Digital Twins: Bridging the Physical and Digital World

In short, a digital twin is a precise replica of something in the physical world, kept current through real-time updates. Virtual reality, 3D data and graphics are used to create virtual buildings and other models of products, services, systems, processes, and so on. SAP Senior Vice President of IoT Thomas Kaiser says digital twins are “becoming a business imperative, covering the entire lifecycle of an asset or process and forming the foundation for connected products and services.” ... The concept of a digital twin has been around since 2002 but was overshadowed by IoT. However, it has made a resurgence, and in 2017 it was part of Gartner’s Top 10 Strategic Technology Trends. Digital twins have become cost-effective to implement and imperative in today’s business, combining the virtual and physical worlds to enable data analysis and system monitoring. They also help forestall problems before they occur, avoid interruptions, uncover new opportunities, and plan for the future with simulations. Digital twins use real-world data to create simulations that predict how a production process will perform, incorporating IoT, Industry 4.0, artificial intelligence (AI), and software analytics to deliver better results.


Self-Service Security for Developers Is the DevSecOps Brass Ring

The ability of organizations to fold self-service security functionality into these internal platforms tends to be highly correlated with the degree to which security integration has been achieved across the software delivery life cycle. The survey asked respondents to indicate in which of the five phases of the life cycle security is integrated: requirements, design, building, testing, and deployment. It found the proportion of organizations with two or more phases integrated has gone up from 63% last year to 70% this year. The proportion of organizations with complete integration now stands at 12%. As the report explains, the self-service offering of security and compliance validation is intertwined with the push for greater integration. Among those with three to four phases of development integrated with security, 42% offer self-service security and compliance validation. And 58% of those that have achieved full security integration across all five phases say they provide self-service security. Companies that have fully integrated security are more than twice as likely to offer self-service security as firms with no security integration.



Quote for the day:

"When I finally got a management position, I found out how hard it is to lead and manage people." -- Guy Kawasaki

Daily Tech Digest - November 13, 2020

Manufacturing is becoming a major target for ransomware attacks

For cyber criminals, manufacturing makes a highly strategic target because in many cases these are operations that can't afford to be out of action for a long period of time, so they could be more likely to give in to the demands of the attackers and pay hundreds of thousands of dollars in bitcoin in exchange for getting the network back. "Manufacturing requires significant uptime in order to meet production and any attack that causes downtime can cost a lot of money. Thus, they may be more inclined to pay attackers," Selena Larson, intelligence analyst for Dragos, told ZDNet. "Additionally, manufacturing operations don't necessarily have the most robust cybersecurity operations and may make interesting targets of opportunity for adversaries," she added. The nature of manufacturing means industrial and networking assets are often exposed to the internet, providing avenues for hacking groups and ransomware gangs to gain access to the network via remote access technology such as remote desktop protocol (RDP) and VPN services or vulnerabilities in unpatched systems. As of October 2020, Dragos said there were at least 108 advisories containing 262 vulnerabilities impacting industrial equipment found in manufacturing environments during the course of this year alone.


Humanitarian data collection practices put migrants at risk

“Instead of helping people who face daily threats from unaccountable surveillance agencies – including activists, journalists and people just looking for better lives – this ‘aid’ risks doing the very opposite,” said PI advocacy director Edin Omanovic. To overcome the issues related to “surveillance humanitarianism”, the report recommends that all UN humanitarian and related bodies “adopt and implement mechanisms for sustained and meaningful participation and decision-making of migrants, refugees and stateless persons in the adoption, use and review of digital border technologies”. Specifically, it added that migrants, refugees and others should have access to mechanisms that allow them to hold bodies like the UNHCR directly accountable for violations of their human rights resulting from the use of digital technologies, and that technologies should be prohibited if they cannot be shown to meet equality and non-discrimination requirements. It also recommends that UN member states place “an immediate moratorium on the procurement, sale, transfer and use of surveillance technology, until robust human rights safeguards are in place to regulate such practices”. A separate report on border and migration “management” technologies published by European Digital Rights (EDRi), which was used to supplement the UN report ...


Machine Learning Testing: A Step to Perfection

Usually, software testing includes unit tests, regression tests and integration tests. Moreover, there are certain rules that people follow: don’t merge the code before it passes all the tests; always test newly introduced blocks of code; when fixing bugs, write a test that captures the bug. Machine learning adds more actions to your to-do list. You still need to follow ML’s best practices. Moreover, every ML model needs not only to be tested but evaluated. Your model should generalize well. This is not what we usually understand by testing, but evaluation is needed to make sure that the performance is satisfactory. ... First of all, you split the dataset into three non-overlapping sets. You use a training set to train the model. Then, to evaluate the performance of the model, you use two sets of data: Validation set - having only a training set and a testing set is not enough if you do many rounds of hyperparameter tuning (which is always the case), and that can result in overfitting. To avoid that, you can select a small validation data set to evaluate the model. Only after you get maximum accuracy on the validation set do you bring the testing set into the game; and Test set (or holdout set) - your model might fit the training dataset perfectly well. ...
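A minimal sketch of the three-way split described above. The function name and the 70/15/15 fractions are arbitrary illustrative choices:

```python
import random

def three_way_split(data, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle a dataset and split it into non-overlapping train/validation/test sets."""
    items = list(data)
    random.Random(seed).shuffle(items)          # seeded so the split is reproducible
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]                       # holdout set: touched only at the very end
    val = items[n_test:n_test + n_val]          # used while tuning hyperparameters
    train = items[n_test + n_val:]              # used to fit the model
    return train, val, test

train, val, test = three_way_split(range(1000))
assert len(train) + len(val) + len(test) == 1000
assert not (set(train) & set(val)) and not (set(val) & set(test))
```

Because hyperparameters are tuned against the validation set, the test set stays untouched until the final evaluation; otherwise its score stops being an honest estimate of generalization.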


How The Future Of Deep Learning Could Resemble The Human Brain

For deep learning, the model training stage is very similar to the initial learning stage of humans. During early stages, the model experiences a mass intake of data, which creates a significant amount of information to mine for each decision and requires significant processing time and power to determine the action or answer. But as training occurs, neural connections become stronger with each learned action and adapt to support continuous learning. As each connection becomes stronger, redundancies are created and overlapping connections can be removed. This is why continuously restructuring and sparsifying deep learning models during training time, and not after training is complete, is necessary. After the training stage, the model has lost most of its plasticity and the connections cannot adapt to take over additional responsibility, so removing connections can result in decreased accuracy. Current methods, such as one unveiled in 2020 by MIT researchers, that attempt to shrink the deep learning model after the training phase have reportedly seen some success. However, if you prune in the earlier stages of training, when the model is most receptive to restructuring and adapting, you can drastically improve results.
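As a toy illustration of the during-training pruning idea (not the MIT method, and with made-up weights), here is a magnitude-pruning sketch that zeroes the weakest surviving weights after each training round, while the network still has the plasticity to adapt:

```python
def prune_smallest(weights, frac):
    """Zero out the smallest-magnitude fraction of the remaining nonzero weights."""
    nonzero = [i for i in range(len(weights)) if weights[i] != 0.0]
    k = int(len(weights) * frac)
    # indices of the k surviving weights closest to zero
    victims = sorted(nonzero, key=lambda i: abs(weights[i]))[:k]
    pruned = list(weights)
    for i in victims:
        pruned[i] = 0.0
    return pruned

# Prune a little on every training round ("during training"), rather than
# once after training is complete.
weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
for epoch in range(3):
    # ... gradient updates would happen here, letting the net adapt ...
    weights = prune_smallest(weights, frac=0.2)   # drop the weakest ~20% each round

assert weights == [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

In a real network the gradient step between prunes is what lets the remaining connections take over the removed ones' responsibility; pruning the same total fraction in one shot after training skips that adaptation.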


Quantum Computing: A Bubble Ready to Burst?

If there is a quantum bubble, it’s inflated both by the new flurry of Sycamore-type academic work and a simultaneous push from private corporations to develop real-world quantum applications, like avoiding traffic jams, as a form of competitive advantage. We’ve known about the advantages that quantum physics can offer computing since at least the 1980s, when Argonne physicist Paul Benioff described the first quantum mechanical model of a computer. But the allure of the technology seems to have just now bitten enterprising businesspeople from the tiniest of startups to the largest of conglomerates. “My personal opinion is there’s never been a more exciting time to be in quantum,” says William Hurley. Strangeworks, the startup he founded in 2018, serves as a sort of community hub for developers working on quantum algorithms. Hurley, a software systems analyst who has worked for both Apple and IBM, says that more than 10,000 developers have signed up to submit their algorithms and collaborate with others. Among the collaborators—Austin-based Strangeworks refers to them as “friends and allies”—is Bay Area startup Rigetti Computing, which supplies one of the three computers that Amazon Web Services customers can access to test out their quantum algorithms.


C++ programming language: How it became the invisible foundation for everything, and what's next

As of September 2020, C++ is the fourth most popular programming language globally behind C, Java and Python, and – according to the latest TIOBE index – is also the fastest growing. C++ is a general-purpose programming language favored by developers for its power and flexibility, which makes it ideal for operating systems, web browsers, search engines (including Google's), games, business applications and more. Stroustrup summarizes: "If you have a problem that requires efficient use of hardware and also to handle significant complexity, C++ is an obvious candidate. If you don't have both needs, either a low-level efficient language or a high-level wasteful language will do." Yet even with its widespread popularity, Stroustrup notes that it is difficult to pinpoint exactly where C++ is used, and for what. "A first estimate for both questions is 'everywhere'," he says. "In any large system, you typically find C++ in the lower-level and performance-critical parts. Such parts of a system are often not seen by end-users or even by developers of other parts of the system, so I sometimes refer to C++ as an invisible foundation of everything."


Cybercrime To Cost The World $10.5 Trillion Annually By 2025

Cybercrime has hit the U.S. so hard that in 2018 a supervisory special agent with the FBI who investigates cyber intrusions told The Wall Street Journal that every American citizen should expect that all of their data (personally identifiable information) has been stolen and is on the dark web — a part of the deep web — which is intentionally hidden and used to conceal and promote heinous activities. Some estimates put the size of the deep web (which is not indexed or accessible by search engines) at as much as 5,000 times larger than the surface web, and growing at a rate that defies quantification. The dark web is also where cybercriminals buy and sell malware, exploit kits, and cyberattack services, which they use to strike victims — including businesses, governments, utilities, and essential service providers on U.S. soil. A cyberattack could potentially disable the economy of a city, state or our entire country. In his 2015 New York Times bestseller — Lights Out: A Cyberattack, A Nation Unprepared, Surviving the Aftermath — Ted Koppel reveals that a major cyberattack on America’s power grid is not only possible but likely, that it would be devastating, and that the U.S. is shockingly unprepared.


Role of FinTech in the post-COVID-19 world

As the global economy recovers from COVID-19, one particular area of focus for FinTech is financial inclusion. According to the World Bank, there are currently around 1.7 billion unbanked individuals worldwide, and FinTechs will be central to efforts to integrate these people into the global banking system. Doing so will help to mitigate the economic and social impact of the pandemic. According to Deloitte, FinTechs, in strategic partnerships with financial institutions, retailers and government sectors across jurisdictions, can help democratise financial services by providing basic financial services in a fair and transparent way to economically vulnerable populations. Digital finance is also expanding in other areas. Health concerns in the COVID-19 era have made physical cash payments less practical, opening the door to an increase in digital payments and e-wallets. Though cash use was predicted to decline in any case, COVID-19 has hastened that decline, due to concerns that handing over money can cause human-to-human transmission of the virus. According to a Mastercard survey looking at the implications of the coronavirus pandemic, 82 percent of respondents worldwide viewed contactless as the cleaner way to pay, and 74 percent said they will continue to use contactless payment post-pandemic.



DNS cache poisoning poised for a comeback: Sad DNS

Here's how it works: First, DNS is the internet's master address list. With it, instead of writing out an IPv4 address like "173.245.48.1," or an IPv6 address such as "2400:cb00:2048:1::c629:d7a2," one of Cloudflare's many addresses, you simply type in "http://www.cloudflare.com," DNS finds the right IP address for you, and you're on your way. With DNS cache poisoning, however, your DNS requests are intercepted and redirected to a poisoned DNS cache. This rogue cache gives your web browser or other internet application a malicious IP address. Instead of going to where you want to go, you're sent to a fake site. That forged website can then upload ransomware to your PC or grab your user name, password, and account numbers. In a word: Ouch! Modern defense measures -- such as randomizing both the DNS query ID and the DNS request source port, DNS-based Authentication of Named Entities (DANE), and Domain Name System Security Extensions (DNSSEC) -- largely stopped DNS cache poisoning. These DNS security methods, however, have never been deployed widely enough, so DNS-based attacks still happen. Now, though, researchers have found a side-channel attack, dubbed SAD DNS, that can be successfully used against the most popular DNS software stacks.
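To make the query-ID defense concrete, here is a sketch that builds a raw DNS A-record query with a random 16-bit transaction ID (per the RFC 1035 wire format); the source-port half of the entropy comes from binding a UDP socket to port 0 so the OS picks an ephemeral port. The helper name is ours, not from any DNS library, and a poisoner must guess both values to forge an accepted reply:

```python
import random
import struct

def build_dns_query(hostname):
    """Build a DNS A-record query packet with a randomized transaction ID."""
    txid = random.randrange(0, 1 << 16)                        # random 16-bit query ID
    # 12-byte header: ID, flags (RD set), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)                # QTYPE=A, QCLASS=IN
    return txid, header + question

txid, packet = build_dns_query("www.cloudflare.com")
assert 0 <= txid < 65536
assert packet[12:16] == b"\x03www"   # first label right after the 12-byte header
# To send: sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); sock.bind(("", 0))
```

A spoofed answer is only accepted if it echoes both the transaction ID and arrives at the right source port, so together they give roughly 32 bits of entropy; the SAD DNS side channel works by recovering the port half, collapsing the search space back to the 16-bit ID.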


CIOs tasked to make healthcare infrastructure composable

The composable healthcare organization is a healthcare organization that can reconfigure its capabilities -- both its business and operating model -- at the pace of market change. We have lived in a world and in an industry where there's been stable business and operational models. If you're a provider organization or a payer organization or a life sciences company, those heritage business models have been pretty stable. That's in terms of how organizations think, their culture, the way their business is architected -- so the organizational structures, the way they collaborate, all the way down to the way we've architected technology. They've really done that in service of a relatively stable business and operating model. What we're making here are three main points. On a very simple level it's this: Adaptability is more important than ever, adaptability is more possible than ever, adaptability can be done by the people who you and I are speaking to -- the people you're reporting for and the people we work with on the Gartner health team. The idea of adaptability is nothing new to CIOs, in general. If you go back to when many of today's CIOs were in high school or even in college, there was reusable code, object-oriented programming -- we've just gone through a decade-and-a-half of more data services and agile development.



Quote for the day:

"If you genuinely want something, don't wait for it--teach yourself to be impatient." -- Gurbaksh Chahal

Daily Tech Digest - November 12, 2020

The Ever-Expanding List of C-Level Technology Positions

In decades past, it was relatively uncommon for IT leaders to be part of the top tier of executive management. Even those who held the title of chief information officer (CIO) often reported to someone other than the chief executive officer (CEO). But digital transformation has changed that. As enterprises seek new ways of doing business, CIOs have begun playing a bigger role in directing the overall strategy of the business. Several different surveys have found that more than half of CIOs now report to CEOs, and many CEOs list their CIOs as one of their most trusted advisors. ... However, while they might not be ascending to the top job, IT leaders are finding more opportunities to join the executive team. The twin trends of digital transformation and the rise of big data analytics have led many enterprises to create new C-level positions directly related to technology. In fact, some industry analysts have begun to wonder if organizations have created too many new C-level technology roles. Some are forecasting that in the years ahead enterprises might be re-vamping their org structure to cut back on these new C-level positions. But for now, IT leaders seem to have more opportunities to fill C-level roles than ever before.


Applying Lean and Accelerate to Deliver Value: QCon Plus Q&A

It is important to understand that delay degrades the economic value of what we deliver - there is a cost to delays, and it can be significant. Think about the loss of opportunity or revenue if a software product is delivered late, especially in a highly competitive market segment. Delays also slow down feedback, which makes it harder to adapt to new information. You can also incur significant risk of outages or customer turnover if features are delivered late. With this in mind, just as we spend so much time optimizing and tuning the latency and throughput of our software systems, we should spend time to optimize and tune the latency and throughput of our development process. It turns out when you look at the math and dynamics of product delivery pipelines, the biggest contributor to delay is letting queues back up. Unlike in manufacturing, these queues are invisible in software development, so it is important that we make an effort to make them visible, and then address them quickly and aggressively. Two powerful ways to reduce queues are limiting work in progress and keeping your batch sizes small.
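The link between backed-up queues and delay can be made concrete with Little's Law, a standard queueing-theory result: average cycle time = work in progress / throughput. The numbers below are illustrative:

```python
def cycle_time(work_in_progress, throughput):
    """Little's Law: average cycle time = WIP / throughput.
    With throughput fixed, every item sitting in a queue adds delay."""
    return work_in_progress / throughput

# A team finishing 5 work items per week:
assert cycle_time(work_in_progress=30, throughput=5) == 6.0   # 6 weeks per item
assert cycle_time(work_in_progress=10, throughput=5) == 2.0   # a WIP limit cuts delay
```

This is why limiting WIP and keeping batches small reduce delay directly: they shrink the numerator without requiring the team to work any faster.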


Banking Trojan Can Spy on Over 150 Financial Apps

The Kaspersky researchers first came across the Ghimob Trojan in August while examining a Windows campaign related to another malware strain circulating in Brazil. "We believe this campaign could be related to the Guildma [Brazilian banking Trojan] threat actor for several reasons, but mainly because they share the same infrastructure," according to the report. "It is also important to note that the protocol used in the mobile version is very similar to that used for the Windows version." Unlike other types of Android-focused malware, the Ghimob Trojan does not disguise itself as a legitimate app that is hidden within the official Google Play Store. Instead, the fraudsters attempt to lure victims into installing a malicious file through a phishing or spam email that suggests that the recipient has some kind of debt, according to the report. The message includes an "informational" link for the victim to click on, which starts the malware delivery. The malicious link is usually disguised to appear as either a Google Defender, a Google Doc or a WhatsApp Updater, according to the report. If opened, it installs the Ghimob Trojan within the device. The malware's first step is to check for any emulators or debuggers which, if found, are terminated.


How to stress-test your business continuity management

“You really need to be in a position to mitigate against any potential risks both before a system is live, and afterwards, so there are no nasty surprises. End to end testing of every platform, both independently and in terms of its integration with the wider network of systems, is therefore critical. However this needs to be balanced against the need to deliver with speed and certainty – so strong automated testing should be seen as a standard component of your production systems. “This will usually be provided by an independent quality assurance specialist. At Expleo we actually automate this process for clients to account for the complexity and speed of the technology and release cycles. Automated testing not only safeguards quality, but also adds value by providing immediate speed and efficiency gains. “First, ML cuts through the testing workload and sieves the data at scale, surfacing the highest-priority test cases. Then, AI analyses this data in real-time, so we can respond to risks before they become issues. This is used as the basis for predictive analysis – so you can predict where risk is going to emerge and mitigate it in the most cost effective way.”


What's next for AI: Gary Marcus talks about the journey toward robust artificial intelligence

Marcus points out this is a really deep deficiency, and one that goes back to 1965. ELIZA, the first chatbot, just matched keywords and talked to people about therapy. So there's not much progress, Marcus argues, certainly not exponential progress as people like Ray Kurzweil claim, except in narrow fields like playing chess. We still don't know how to make a general purpose system that could understand conversations, for example. The counter-argument to that is that we just need more data and bigger models (hence more compute, too). Marcus begs to differ, and points out that AI models have been growing, and consuming more and more data and compute, but the underlying issues remain. Recently, Geoff Hinton, one of the forefathers of deep learning, claimed that deep learning is going to be able to do everything. Marcus thinks the only way to make progress is to put together building blocks that are there already, but no current AI system combines. ... A connection to the world of classical AI. Marcus is not suggesting getting rid of deep learning, but using it in conjunction with some of the tools of classical AI. Classical AI is good at representing abstract knowledge, representing sentences or abstractions. The goal is to have hybrid systems that can use perceptual information.


Passage of California privacy act could spur similar new regulations in other states

The COVID-19 crisis has derailed a lot of legislative activity across the country, making it difficult to get a solid sense of where privacy initiatives are headed. “The challenge you're going to find is that post-pandemic most of the state legislatures said anything that's not COVID related is not being considered,” Stockburger says. After the pandemic recedes from its urgent priority status, many states could kick new legislative efforts into gear. “Next year, that's when you're going to see big new developments and introductions,” he says. ... Another question that remains is whether the federal government will step in to create a more consistent privacy law framework. In the past, Silicon Valley giants stood staunchly opposed to the stringent provisions of the CCPA and sought a national privacy law to preempt and water down the CCPA’s requirements. However, their resistance has weakened over the past several years. “At the federal level, there's just a real challenge in getting any type of omnibus legislative efforts pushed through,” Stockburger says. “That’s been a challenge since probably 2016 when the Democrats got whooped in the midterms, and since then, we've had divided Congress.”


5 Things We’ve Learned from Digital Transformation in the Last 5 Years

While mobile offerings may have been a luxury five years ago, they are now an indispensable channel. Many organizations previously viewed mobile services as a nice-to-have, or as an offering geared towards a younger generation of tech-savvy consumers. However, now that contactless operations are the norm, offerings that incorporate mobile capture and mobile onboarding are a must-have for meeting the needs of the new digital-first consumer. From check deposits to application submissions, mobile services can go a long way in providing convenience, accessibility and ease. Organizations that embrace mobile capabilities and seamlessly connect them with back-end systems are well-positioned to enhance the customer experience and improve customer retention. Five years ago, it wasn’t uncommon for an organization’s process discovery methods to be defined by one-on-one interviews, firsthand observations and manual analysis. It was typical for business leaders to map out processes via post-it notes — what used to be referred to as “walking the wall.” Now, however, organizations are turning to machine learning and predictive analytics to discover and analyze their processes in a more accurate way.


DDoS Protection for Workloads on AWS with GWLB & DefensePro VA

There are many ways to deploy DefensePro VA with AWS Gateway Load Balancer to achieve north-south and/or east-west inspection. AWS Gateway Load Balancer adheres to multiple deployment use cases and network architectures. The AWS Gateway Load Balancer provides the VPC Endpoint Service, which allows customers to mimic on-prem networking paradigms, such as hub-and-spoke, across different VPCs and accounts. Customers can create a VPC dedicated to DDoS inspection where a group of DefensePro appliances is deployed with a Gateway Load Balancer. By utilizing AWS Ingress Routing, customers have full control of traffic routing to and from the DDoS inspection VPC. The following network topology illustrates a simplified deployment of DefensePro VA in a dedicated DDoS inspection VPC. There are two VPCs: the Customer VPC, which is Internet-facing, and the DDoS-Inspection VPC. The Customer VPC has two Availability Zones for high availability of application instances. Each zone includes a Gateway Load Balancer endpoint that steers traffic to/from the Gateway Load Balancer located in the DDoS-Inspection VPC. A group of DefensePro VAs is deployed in the DDoS-Inspection VPC, spanning two Availability Zones, for high availability.


Does Your Business Need a Digital Transformation?

Because a digital transformation inevitably involves new systems, processes, and skills, it can be daunting for many leaders and teams. Embracing new technology involves a willingness to disrupt current processes and to develop new ones. This can be uncomfortable and challenging, and it’s important for leaders to acknowledge that from the outset. For many businesses, a digital transformation means completely rethinking systems and processes in order to embed technology throughout them. From the start, leadership teams need to be willing to make these major changes in order to take advantage of new tools. ... Perhaps the most important thing you can do is to prepare your team. Whenever there are major changes, leaders should expect some pushback. It’s important to anticipate and proactively address this issue to ensure that your team is ready and supportive of upcoming changes. A simple way to prepare your team is by being transparent about the planning process, goals, and anticipated shifts. Involving them in the process as much as possible will lead to increased buy-in and engagement from all levels of your team.


Stop thinking of cybersecurity as a problem: Think of it as a game

Companies can’t afford large-scale cyberattacks at any time, but especially right now. The pandemic has caused consumers who may have lost significant income to be picky with their purchases and investments. Companies need to be focused on retaining customer relationships so that they’ll weather the pandemic, and a take-down of the network could undercut customer trust in unrecoverable ways. But many companies won’t take action. They may view their older systems as good enough to ride the wave to the other side of the pandemic, and once there, they’ll go back to what they had used before, unprepared for the next attack. They may get through, but nothing will have changed — things will not go back to how they were, and you will no longer be able to rely on systems that protected a pre-COVID world. Now, there’s an opportunity to huddle up, form a new strategy, and go on the offensive. The pandemic can be an opportunity for businesses to take a look at their vulnerabilities, map their attack surface, and take appropriate actions to secure and strengthen their systems.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - November 11, 2020

The Role of Relays In Big Data Integration

The very nature of big data integration requires an organization to become more flexible in some ways, particularly when gathering input and metrics from such varied sources as mobile apps, browser heuristics, A/V input, software logs, and more. The number of different methodologies, protocols, and formats that your organization needs to ingest while complying with both internal and government-mandated standards can be staggering. ... What if, instead of just allowing all of that data to flow in from dozens of information silos, you introduced a set of intelligent buffers? Imagine that each of these buffers was purpose-built for the kind of input that you needed to receive at any given time: shell scripts, REST APIs, federated DBs, hashed log files, and the like. Let’s call these intelligent buffers what they really are: relays. They ingest SSL-encrypted data, send out additional queries as needed, and provide fault-tolerant data access according to ACLs specific to the team and server-side apps managing that dataset. If you were to set up such a distributed relay architecture to deal with your big data integration chain, it might look something like this.
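As a rough sketch of what one such purpose-built relay might look like in practice, here is a minimal Python stub. This is illustrative only, not the article's actual architecture: the class name, the required-schema fields, and the JSON-over-REST assumption are all hypothetical.

```python
import json
from collections import deque


class Relay:
    """A minimal relay: a purpose-built buffer that validates one input
    format, queues accepted records, and keeps rejects for auditing."""

    def __init__(self, required_fields):
        self.required_fields = set(required_fields)
        self._queue = deque()
        self.rejected = []

    def ingest(self, raw):
        """Accept a raw JSON payload; queue it only if it parses and
        matches this relay's expected schema."""
        try:
            record = json.loads(raw)
        except json.JSONDecodeError:
            self.rejected.append(raw)
            return False
        if not self.required_fields.issubset(record):
            self.rejected.append(raw)
            return False
        self._queue.append(record)
        return True

    def drain(self):
        """Yield buffered records to the downstream consumer."""
        while self._queue:
            yield self._queue.popleft()


# One relay per input type; e.g. a relay for REST API event payloads.
rest_relay = Relay(required_fields={"source", "timestamp", "payload"})
rest_relay.ingest('{"source": "app", "timestamp": 1605571200, "payload": {}}')
rest_relay.ingest('not json at all')  # rejected, but retained for auditing
records = list(rest_relay.drain())
```

In a real deployment each relay would also handle encryption, retries, and ACL checks, but the core idea is the same: a narrow, schema-aware buffer per input type rather than one undifferentiated firehose.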


Malware Hidden in Encrypted Traffic Surges Amid Pandemic

Ransomware attacks delivered via SSL/TLS channels soared 500% between March and September, with a plurality of the attacks (40.5%) targeted at telecommunication and technology companies. Healthcare organizations were targeted more than entities in other verticals, accounting for 1.6 billion, or over 25%, of all SSL-based attacks Zscaler blocked this year. Finance and insurance companies clocked in next with 1.2 billion, or 18%, of attacks blocked, and manufacturing organizations were the third-most targeted, with some 1.1 billion attacks directed against them. Deepen Desai, CISO and vice president of security research at Zscaler, says the trend shows why security groups need to be wary about encrypted traffic traversing their networks. While many organizations routinely encrypt traffic as part of their security best practices, fewer are inspecting it for threats, he says. "Most people assume that encrypted traffic means safe traffic, but that is unfortunately not the case," Desai says. "That false sense of security can create risk when organizations allow encrypted traffic to go uninspected."


Shadow IT: The Risks and Benefits That Come With It

Covid-19-induced acceleration of remote work has led to employees being somewhat lax about cybersecurity. Shadow IT might make business operations easier – and many companies certainly have been needing that in the last few months – but from the cybersecurity point of view, it also brings about more risks. If your IT team doesn’t know about an app or a cloud system that you’re using in your work, they can’t be responsible for any consequences of such usage. This includes those impacting the infrastructure of the entire organization. The responsibility falls on you to ensure the security of your company’s data whilst using the shadow IT app. Otherwise, your entire organization is at risk. It’s also easy to lose your data if your shadow IT systems don’t back anything up. If they’re your only method of storage and something goes wrong, you could potentially lose all your valuable data. If you work in government, healthcare, banking, or another heavily regulated sector, chances are that you have local normative acts regulating your IT usage. It’s likely that your internal systems wouldn’t even allow you to access certain websites or apps.


Refactoring Java, Part 2: Stabilizing your legacy code and technical debt

Technical debt is code with problems that can be improved with refactoring. The technical debt metaphor is that it’s like monetary debt: when you borrow money to purchase something, you must pay back more money than you borrowed; that is, you pay back the original sum plus interest. When someone writes low-quality code or writes code without first writing automated tests, the organization incurs technical debt, and someone has to pay interest, at some point, on the debt that’s due. The organization’s interest payments aren’t necessarily in money. The biggest cost is the loss of technical agility, since you can’t update or otherwise change the behavior of the software as quickly as needed. And less technical agility means the organization has less business agility: the organization can’t meet stakeholders’ needs at the desired speed. Therefore, the goal is to refactor debt-ridden code. You’re taking the time to fix the code to improve technical and business agility. Now let’s start playing with the Gilded Rose kata’s code and see how to stabilize the code, while preparing to add functionality quickly in an agile way. One huge problem with legacy code is that someone else wrote it.
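A standard first step in stabilizing legacy code like this is a characterization test: a test that records what the code actually does today, so a later refactoring can be verified against it. The following is a toy Python sketch in the spirit of the Gilded Rose kata, not the article's actual Java walkthrough; the function and item names are illustrative.

```python
# Legacy function we dare not change yet: its rules are implicit and untested.
def degrade_quality(name, quality):
    if name == "Aged Brie":
        return min(50, quality + 1)   # improves with age, capped at 50
    return max(0, quality - 1)        # everything else degrades, floored at 0


# Characterization tests: pin down the current behavior, including the edge
# cases, before touching the code. Any refactoring that changes behavior
# is then caught immediately.
def test_characterization():
    assert degrade_quality("Aged Brie", 49) == 50
    assert degrade_quality("Aged Brie", 50) == 50   # never exceeds 50
    assert degrade_quality("Elixir", 1) == 0
    assert degrade_quality("Elixir", 0) == 0        # never drops below 0


test_characterization()
```

With this safety net in place, you can refactor the conditionals into cleaner abstractions while the tests guarantee the observable behavior is unchanged.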


Interactive Imaging Technologies in the Wolfram Mathematica

The range of mathematical problems that can be solved using computer algebra systems is constantly expanding. Considerable research effort is directed at developing algorithms for computing topological invariants of manifolds and knots, invariants of algebraic curves, cohomology of various mathematical objects, and arithmetic invariants of rings of integers in algebraic number fields. Another area of modern research is quantum algorithms, which sometimes have polynomial complexity where the existing classical algorithms have exponential complexity. Computer algebra comprises theory, technology, and software. Its applied results include algorithms and software for solving problems on a computer in which both the input data and the results take the form of mathematical expressions and formulas. The main product of computer algebra has become computer algebra software systems. There are many systems in this category, many publications are devoted to them, and updates are released regularly presenting the capabilities of new versions.


EU to introduce data-sharing measures with US in weeks

Companies will be able to use the assessment to decide whether they want to use a data transfer mechanism, and whether they need to introduce additional safeguards, such as encryption, to mitigate any data protection risks, said Gencarelli. The EC is expected to offer companies “non-exhaustive” and “non-prescriptive” guidance on the factors they should take into account. This includes the security of computer systems used, whether data is encrypted and how organisations will respond to requests from the US or other government law enforcement agencies for access to personal data on EU citizens. Gencarelli said relevant questions would include: What do you do as a company when you receive an access request? How do you review it? When do you challenge it – if, of course, you have grounds to challenge it? Companies may also need to assess whether they can use data minimisation principles to ensure that any data on EU citizens they hand over in response to a legitimate request by a government is compliant with EU privacy principles. The guidelines, which will be open for public consultation, will draw on the experience of companies that have developed best practices for SCCs and of civil society organisations.


Unlock the Power of Omnichannel Retail at the Edge

The Edge exists wherever the digital world and physical world intersect, and data is securely collected, generated, and processed to create new value. According to Gartner, by 2025, 75 percent of data will be processed at the Edge. For retailers, Edge technology means real-time data collection, analytics and automated responses where they matter most — on the shop floor, be that physical or virtual. And for today’s retailers, it’s what happens when Edge computing is combined with Computer Vision and AI that is most powerful and exciting, as it creates the many opportunities of omnichannel shopping. With Computer Vision, retailers enter a world of powerful sensor-enabled cameras that can see much more than the human eye. Combined with Edge analytics and AI, Computer Vision can enable retailers to monitor, interpret, and act in real-time across all areas of the retail environment. This type of vision has obvious implications for security, but for retailers it also opens up huge possibilities in understanding shopping behavior and implementing rapid responses. For example, understanding how customers flow through the store, and at what times of the day, can allow the retailer to put more important items directly in their paths to be more visible. 


4 Methods to Scale Automation Effectively

An essential element of the automation toolkit is the value-determination framework, which guides the identification and prioritization of automation opportunity decisions. However, many frameworks apply such a heavy weighting to cost reduction that other value dimensions are rendered meaningless. Evaluate impacts beyond savings to capture other manifestations of value; this will expand the universe of automation opportunities and appeal to more potential internal consumers. Benefits such as improving quality, reducing errors, enhancing speed of execution, liberating capacity to work on more strategic efforts, and enabling scalability should be appropriately considered, incorporated, and weighted in your prioritization framework. Keep in mind that where automation drives the greatest value changes over time depending on both evolving organizational priorities and how extensive the reach of the automation program has been. Periodically reevaluate the value dimensions of your framework and their relative weightings to determine whether any changes are merited. Typically, nascent automation programs take an “inside-out” approach to developing capability, where the COE is established first and federation is built over time as ownership and participation extends radially out to business functions and/or IT. 


Digital transformation: 5 ways to balance creativity and productivity

One of the biggest challenges is how to ensure that creative thinking is an integral part of your program planning and development. Creativity is fueled by knowledge and experience. It’s therefore important to make time for learning, whether that’s through research, reading the latest trade publication, listening to a podcast, attending a (virtual) event, or networking with colleagues. It’s all too easy to dismiss this as a distraction and to think “I haven’t got time for that” because you can’t see an immediate output. But making time to expand your horizons will do wonders for your creative thinking. ... However, the one thing we initially struggled with was how to keep being innovative. We were used to being together in the same room, bouncing ideas off one another, and brainstorms via video call just didn’t have the same impact. However, by adopting some simple techniques, such as using interactive whiteboards and prototyping through demos on video platforms, we’ve managed to restore our creative energy. To make it through the pandemic, companies have had to think outside the box, either by looking at alternative revenue streams or adapting their existing business model. Businesses have proved their ability to make decisions, diversify at speed, and be innovative. 


Google Open-Sources Fast Attention Module Performer

The Transformer neural-network architecture is a common choice for sequence learning, especially in the natural-language processing (NLP) domain. It has several advantages over previous architectures, such as recurrent neural networks (RNNs); in particular, the self-attention mechanism that allows the network to "remember" previous items in the sequence can be executed in parallel on the entire sequence, which speeds up training and inference. However, since self-attention can link each item in the sequence to every other item, the computational and memory complexity of self-attention is O(N²), where N is the maximum sequence length that can be processed. This puts a practical limit on sequence length of around 1,024 items, due to the memory constraints of GPUs. The original Transformer attention mechanism is implemented by a matrix of size N×N, followed by a softmax operation; the rows and columns represent queries and keys, respectively. The attention matrix is multiplied by the input sequence to output a set of similarity values. Performer's FAVOR+ algorithm decomposes the matrix into two matrices which contain "random features": random non-linear functions of the queries and keys. 
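The contrast between the two approaches can be sketched in a few lines of NumPy. This is an illustrative toy in the spirit of FAVOR+'s positive random features, not Google's actual Performer code; the feature count m and the unscaled dot products are simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, m = 6, 4, 16          # sequence length, model dim, number of random features

Q = rng.normal(size=(N, d))
K = rng.normal(size=(N, d))
V = rng.normal(size=(N, d))

# Standard attention: materializes the full N x N matrix -> O(N^2) memory.
scores = Q @ K.T
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)
standard_out = A @ V

# Random-feature approximation: phi(x) = exp(w.x - |x|^2 / 2) / sqrt(m) gives
# phi(q) . phi(k) ~ exp(q.k) in expectation, so the softmax kernel is never
# materialized as an N x N matrix.
W = rng.normal(size=(m, d))

def phi(X):
    return np.exp(X @ W.T - (X ** 2).sum(axis=1, keepdims=True) / 2) / np.sqrt(m)

Qp, Kp = phi(Q), phi(K)                  # shapes (N, m)
numer = Qp @ (Kp.T @ V)                  # K'^T V is m x d: cost O(N * m * d)
denom = Qp @ Kp.sum(axis=0)              # per-query normalization
approx_out = numer / denom[:, None]
```

The key point is the order of operations in `numer`: computing `Kp.T @ V` first keeps every intermediate at size m×d or N×m, so memory grows linearly in N rather than quadratically.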



Quote for the day:

"Don't let your future successes be prisoners of your past failure, shape the future you want." -- Gordon Tredgold