Daily Tech Digest - October 28, 2020

IT leaders adjusting to expanded role and importance since coronavirus pandemic

"IT had to ensure that their technical environment could handle the increased online demand, as well any downstream impacts to supply chain, logistics and payment applications all connected to the online engine keeping the company operating and in business. IT had to refocus efforts to enable more robust customer engagements remotely via applications and web portals." She said the best examples of this are insurance claims, government services and applications, most of which were not submitted or enabled via an application or web portal before the COVID-19 pandemic. Despite the increase in importance due to the pandemic, IT has been gaining prominence within enterprises for years, Doebel said. IT has long been moving towards the role of business-critical for several years now as technology and innovation have become synonymous with business growth and improved customer experiences.  IT teams rose to the occasion during the COVID-19 breakout and continue to drive innovation and transformation in these challenging times, she added. Important business decisions are now being put in the hands of IT workers who have to think of ways to future-proof their organizations.


5 famous analytics and AI disasters

In October 2020, Public Health England (PHE), the UK government body responsible for tallying new COVID-19 infections, revealed that nearly 16,000 coronavirus cases went unreported between Sept. 25 and Oct. 2. The culprit? Data limitations in Microsoft Excel. PHE uses an automated process to transfer COVID-19 positive lab results as a CSV file into Excel templates used by reporting dashboards and for contact tracing. Unfortunately, an Excel worksheet is capped at 1,048,576 rows and 16,384 columns. Moreover, PHE was listing cases in columns rather than rows, so each template ran up against the far smaller column limit, and cases beyond it were dropped. ... The "glitch" didn't prevent individuals who got tested from receiving their results, but it did stymie contact tracing efforts, making it harder for the UK National Health Service (NHS) to identify and notify individuals who were in close contact with infected patients. In a statement on Oct. 4, Michael Brodie, interim chief executive of PHE, said NHS Test and Trace and PHE resolved the issue quickly and immediately transferred all outstanding cases into the NHS Test and Trace contact tracing system. PHE put in place a "rapid mitigation" that splits large files, and it has conducted a full end-to-end review of all systems to prevent similar incidents in the future.
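
PHE's "rapid mitigation" is easy to picture in code. Below is a minimal sketch, in Python, of splitting an oversized CSV into chunks that each fit within a worksheet; the file name and chunk size are illustrative, not PHE's actual pipeline.

```python
# Hypothetical sketch: split a large CSV of lab results into chunks that
# each stay under Excel's per-worksheet row limit.
import csv

EXCEL_ROW_LIMIT = 1_048_576       # .xlsx rows per worksheet
CHUNK_ROWS = EXCEL_ROW_LIMIT - 1  # reserve one row for the header

def write_chunk(path: str, part: int, header: list, rows: list) -> None:
    with open(f"{path}.part{part}.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(header)
        writer.writerows(rows)

def split_csv(path: str) -> None:
    with open(path, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)          # keep the header on every chunk
        part, rows = 1, []
        for row in reader:
            rows.append(row)
            if len(rows) == CHUNK_ROWS:
                write_chunk(path, part, header, rows)
                part, rows = part + 1, []
        if rows:                       # flush the final partial chunk
            write_chunk(path, part, header, rows)

split_csv("results.csv")  # "results.csv" is a made-up input file
```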


Legal and security risks for businesses unaware of open source implications

The sobering reality is that compliance is not keeping up with the usage of open source codebases. In view of this, businesses have to consider the impact of open source software on their operations as they move forward in a digitally connected world. Whether they are developing a product using open source components or involved in mergers and acquisitions activity, they have to conduct due diligence on the security and legal risks involved. One approach that has been proposed is to have a Bill of Materials (BOM) for software. Just like the BOM commonly used by manufacturers of hardware, such as smartphones, a BOM for software lists the components and dependencies of each application and offers more visibility. In particular, a BOM generated by an independent software composition analysis (SCA) gives businesses a deeper understanding of the foundation on which so many of their applications are built. Awareness is key to improvement. For starters, businesses cannot patch what they don't know they have, and because patches must match the source they apply to, businesses need to know where their code originates. Open source is not only about source code, either.
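
To make the idea concrete, here is a minimal sketch of the simplest possible software BOM, generated for a Python environment with the standard library; real SCA tools go much further, covering transitive dependencies, license analysis and known-vulnerability matching.

```python
# Minimal BOM sketch: list every installed package with its version and
# declared license. Illustrative only, not a substitute for SCA tooling.
import json
from importlib.metadata import distributions

def build_bom() -> dict:
    components = [
        {
            "name": dist.metadata["Name"],
            "version": dist.version,
            "license": dist.metadata.get("License", "UNKNOWN"),
        }
        for dist in distributions()
    ]
    return {"components": sorted(components, key=lambda c: str(c["name"]).lower())}

if __name__ == "__main__":
    print(json.dumps(build_bom(), indent=2))
```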


Building a hybrid SQL Server infrastructure

The solution to this challenge is to build a SANless failover cluster using SIOS DataKeeper. SIOS DataKeeper performs block-level replication of all the data on your on-prem storage to the local storage attached to your cloud-based VM. If disaster strikes your on-prem infrastructure and the Windows Server Failover Cluster (WSFC) fails SQL Server over to the cloud-based cluster node, that cloud-based node can access its own copy of your SQL Server databases and can fill in for your on-prem infrastructure for as long as you need it to. One other advantage afforded by the SANless failover cluster approach is that there is no limit on the number of databases you can replicate. Whereas you would need to upgrade to SQL Server Enterprise Edition to replicate your user databases to a third node in the cloud, the SANless clustering approach works with both the SQL Server Standard and Enterprise editions. While SQL Server Standard Edition is limited to two nodes in the cluster, DataKeeper allows you to replicate to a third node in the cloud with a manual recovery process. With Enterprise Edition, the third node in the cloud can simply be part of the same cluster.
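
For readers new to the term, block-level replication compares and copies fixed-size blocks rather than whole files or transactions. The toy Python sketch below illustrates the concept only; DataKeeper itself operates on raw volumes, over the network, with synchronous or asynchronous mirroring.

```python
# Toy illustration of block-level replication: copy only the blocks whose
# checksums differ. Assumes the replica file already exists (e.g., from an
# initial full copy); paths are hypothetical.
import hashlib

BLOCK_SIZE = 4096  # bytes per block

def replicate(src_path: str, dst_path: str) -> int:
    """Copy changed blocks from src to dst; return the number written."""
    written = 0
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        index = 0
        while True:
            src_block = src.read(BLOCK_SIZE)
            if not src_block:
                break
            dst.seek(index * BLOCK_SIZE)
            dst_block = dst.read(len(src_block))
            if hashlib.sha256(src_block).digest() != hashlib.sha256(dst_block).digest():
                dst.seek(index * BLOCK_SIZE)
                dst.write(src_block)   # only changed blocks cross the "wire"
                written += 1
            index += 1
    return written

print(replicate("onprem.db", "replica.db"), "blocks updated")
```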


Why Enterprises Struggle with Cloud Data Lakes

The success of any cloud data lake project hinges on continual changes to maximize performance, reliability and cost efficiency. Each of these variables requires constant, detailed monitoring and management of end-to-end workloads. Consider the evolution of data processing engines and the importance of leveraging the most advantageous opportunities around price and performance. Managing workload price performance and cloud cost optimization is just as crucial to cloud data lake implementations, where costs can and will quickly get out of hand if proper monitoring and management aren’t in place. ... Public cloud resources aren’t private by default. Securing a production cloud data lake requires extensive configuration and customization, especially for enterprises that must fall in line with specific regulatory compliance and governance mandates (HIPAA, PCI DSS, GDPR, etc.). Achieving the requisite data safeguards often means enlisting experienced, dedicated teams equipped to lock down cloud resources and restrict access to authorized, credentialed users.
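
One small example of what "not private by default" means in practice: on AWS, blocking public access to an S3 bucket backing a data lake is an explicit API call. Below is a sketch using boto3, with a hypothetical bucket name; a production lockdown also spans IAM policies, encryption, network endpoints and audit logging.

```python
# Block all forms of public access on a (hypothetical) data lake bucket.
# Requires AWS credentials with s3:PutBucketPublicAccessBlock permission.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-data-lake-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # neutralize existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # limit access to AWS principals
    },
)
```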


The No-Code Generation is arriving

Of course, no-code tools often require code, or at least the sort of deductive logic that is intrinsic to coding. You have to know how to design a pivot table, or understand what a machine learning capability is and what it might be useful for. You have to think in terms of data, and about inputs, transformations and outputs. The key here is that no-code tools aren’t successful just because they are easier to use; they are successful because they are connecting with a new generation that understands precisely the sort of logic these platforms require to function. Today’s students don’t see their computers and mobile devices merely as consumption screens to be switched on; they widely use them as tools of self-expression, research and analysis. Take the popularity of platforms like Roblox and Minecraft. Easily derided as just a generation’s obsession with gaming, both platforms actually teach kids how to build entire worlds using their devices. Even better, as kids push the frontiers of the toolsets these games offer, they are inspired to build their own tools. A proliferation of guides and online communities now teaches kids how to build their own games and plugins for these platforms (Lua has never been so popular).
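
The pivot table is a good example of the logic involved. The few lines of pandas below, with made-up sales data, show the input-transformation-output thinking that a no-code tool wraps in a drag-and-drop UI.

```python
# A pivot table as input -> transformation -> output, using toy data.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [100, 120, 90, 130],
})

# Rows become regions, columns become quarters, cells sum revenue.
pivot = sales.pivot_table(index="region", columns="quarter",
                          values="revenue", aggfunc="sum")
print(pivot)
```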


Digital transformation: 4 contrarian tips for measuring success

A CIO once told me that his employees felt confused about how their transformation was progressing. I asked, “How many transformations are you doing right now?” He started listing them and realized that his team had 15 simultaneous changes underway. Worse, every change involved different touchpoints for each individual end user, which created even more confusion for those who didn’t understand why the change was happening. Every incremental digitalization initiative should have a person or team responsible for it – the CIO, CTO, or CEO, or perhaps the internal services organization if it’s driving internal efficiency. In cases of disruptive innovation, the work should take place where it’s easy to let go of past ways of doing things, typically in a separate innovation unit. Measure the outcomes you’re looking to achieve and communicate from an outcome perspective, often through a story – and if your transformation does not fit into your objectives and key results or KPIs ... However, too much of either positive or negative feedback can hurt your progress and indicate a wider problem in your organization: either you sweep negative feedback under the rug and focus only on the positive, which creates a culture of fear, or you focus only on the negative and forget to celebrate the good stuff, which can destroy motivation and create a complaint culture.


Role Of E-Commerce In Driving Technology Adoption For Indian Warehousing Sector

Global supply chain and logistics sectors have undergone a major disruption during the past few months, thanks to the pandemic. Many first-time users logged on to e-commerce websites to make safe, virtual purchases of essentials and had a contactless delivery experience at their doorstep. The sector also witnessed a major shift in popular categories, from luxury and lifestyle purchases to shopping for basic essentials such as groceries, medicines, office and school supplies, e-learning tools and even food delivery. As per an impact report released by Unicommerce, titled E-commerce Trends Report 2020, e-commerce witnessed order-volume growth of 17 per cent as of June 2020, and about 65 per cent growth in single-brand e-commerce platforms. However, in spite of challenges such as the manufacturing slowdown, shortage of labour, transportation bottlenecks, and disruption in the national and international movement of cargo, the massive rise of e-commerce has brought about faster digital adoption and enhanced the potential for overall growth of the sector. With a focus on meeting consumer expectations for speedy delivery, customization, product availability and easy returns while handling the complex globalization of supply chains, warehousing trends have witnessed major shifts.


A robot referee can really keep its ‘eye’ on the ball

Human umps may feel hot or tired. They may have the sun in their eyes or become distracted by a mosquito. They may even unintentionally favor players of certain nationalities, races, ages or backgrounds. A machine will not experience any of these problems. So how does the machine do it? Engineers must first spend several days setting up each stadium that will use the system. They measure the precise position of all the lines and “create a virtual-reality world to mirror what is in the stadium,” explains Hicks. They also set up 12 cameras. These will watch every part of the area where the game takes place. Then the engineers run tests — lots of them — to make sure everything works as it should. During a match, those cameras capture a ball’s flight. Software finds the tennis ball in the video. It can do this in bright, overcast or shadowy conditions. A video camera doesn’t capture every single moment of the ball’s flight, however. It actually takes many still photos very quickly. The number of photos it can take in one second is called the frame rate. In each frame, the ball will be in a new position. The system uses math to calculate a smooth path between all these positions. It also takes wind conditions into account.
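
A toy version of that path calculation fits in a few lines. The sketch below assumes made-up positions and frame rate and uses a simple quadratic (constant-acceleration) fit per axis; the real system fuses many calibrated cameras and also models spin, drag and the measured wind.

```python
# Fit a smooth trajectory through ball positions sampled at the frame rate,
# then evaluate it between frames. All numbers are illustrative.
import numpy as np

FRAME_RATE = 340.0             # frames per second (illustrative)
t = np.arange(8) / FRAME_RATE  # timestamps of 8 consecutive frames

# Hypothetical measured positions in metres (e.g., from triangulation).
x = np.array([0.00, 0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70])
z = np.array([1.00, 1.04, 1.06, 1.07, 1.06, 1.04, 1.00, 0.95])

# Least-squares quadratic fit per axis: position = a*t**2 + b*t + c.
coeff_x = np.polyfit(t, x, 2)
coeff_z = np.polyfit(t, z, 2)

# Query the smooth path at an instant no camera frame captured.
t_query = 3.5 / FRAME_RATE
print(np.polyval(coeff_x, t_query), np.polyval(coeff_z, t_query))
```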


That dreadful VPN might finally be dead thanks to Twingate

So what does Twingate ultimately do? For corporate IT professionals, it allows them to connect an employee’s device to the corporate network much more flexibly than a VPN. For instance, individual services or applications on a device can be set up to securely connect with different servers or data centers. So your Slack application can connect directly to Slack, and your JIRA site can connect directly to JIRA’s servers, all without the typical round trip to a central hub that a VPN requires. That flexibility offers two main benefits. First, internet performance should be faster, since traffic goes directly where it needs to rather than bouncing through several relays between an end-user device and the server. Twingate also says that it offers “congestion” technology that can adapt its routing to changing internet conditions to actively increase performance. More importantly, Twingate allows corporate IT staff to carefully calibrate security policies at the network layer to ensure that individual network requests make sense in context. For instance, if you are a salesperson in the field and suddenly start trying to access your company’s code server, Twingate can identify that request as highly unusual and block it outright.
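
A heavily simplified sketch of that context check, with hypothetical roles and resource names; Twingate’s actual policy engine and APIs are not shown here.

```python
# Toy context-aware authorization: a request is allowed only if the
# resource makes sense for the requester's role.
ROLE_ALLOWED_RESOURCES = {
    "sales":       {"crm.internal", "slack.internal"},
    "engineering": {"git.internal", "jira.internal", "slack.internal"},
}

def authorize(role: str, resource: str) -> bool:
    """Return True only when the resource is expected for this role."""
    return resource in ROLE_ALLOWED_RESOURCES.get(role, set())

# A salesperson suddenly hitting the code server is unusual: block it.
print(authorize("sales", "git.internal"))        # False -> block
print(authorize("engineering", "git.internal"))  # True  -> allow
```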



Quote for the day:

"In simplest terms, a leader is one who knows where he wants to go, and gets up, and goes." -- John Erksine
