Daily Tech Digest - May 18, 2020

Creating a safe path to digital with open standards


Despite the process automation industries being vastly different in their outputs, there are many commonalities in the desire for efficiency, interoperability and the ability to integrate best-in-class technologies. Recognizing the need for cross-industry collaboration, a group of companies representing a variety of verticals got together three years ago to discuss the possibility of developing an open standard for process automation. Each company in attendance was driven by the need for more flexible solutions. Shortly after, the Open Process Automation Forum (OPAF) was born under the guidance of The Open Group. Since then, the Forum has worked to lay the foundations for developing a standard to ensure the security, interoperability and scalability of new control systems. A year ago, over 90 member organizations were involved with the creation of OPAF’s O-PAS Standard, Version 1.0, which is now a full standard of The Open Group. While industry standards for process automation are already available in the marketplace and fit-for-purpose, the O-PAS Standard focuses on interoperability, using existing industry standards and adopting and adapting them to create a “standard of standards.”


Should AI assist in surgical decision-making?

Fully automated surgeries performed by robots are still a ways off. In the meantime, developers are trying to beat those grim numbers by harnessing the best of human decision making and coupling it with truly exceptional technology tools designed to assist surgeons. Artificial intelligence and machine learning are often touted as solutions for call centers and for providing intelligent insights to companies with reams of data that need to be processed, but leveraging AI/ML to improve medical outcomes could be one of the transformative technologies of our time. "Surgical decision-making is dominated by hypothetical-deductive reasoning, individual judgment, and heuristics," write the authors of a recent JAMA Surgery paper called Artificial Intelligence and Surgical Decision-making. "These factors can lead to bias, error, and preventable harm. Traditional predictive analytics and clinical decision-support systems are intended to augment surgical decision-making, but their clinical utility is compromised by time-consuming manual data management and suboptimal accuracy."


Home office technology will need to evolve in the new work normal


Technology will have to know our contexts. The home technology experience will have to adapt to our various modes and have the capacity to manage the compute requirements. "There is a very large innovation cycle coming to really make the world at home adaptable to all of these contexts as we look forward," said Roese. Edge computing will come to the home. As remote work evolves to include augmented and virtual reality as well as video conferencing and data-intensive applications, IT infrastructure at home will change. Roese said that edge computing devices may be deployed in homes by enterprises to beef up home infrastructure. "Early, when we were talking about edge, it was all about smart factories and smart cities and smart hospitals, but there's another class of edge compute that's really interesting in this new world," said Roese. "And that is to augment the compute capacity of the devices that attach to that edge." 5G, AR, VR and applications that need horsepower would use these edge compute devices. Edge computing in the home could provide more real-time experiences and more compute capacity, and these edge devices would also offer scale on demand.



Grafana: The Open Observability Platform

Grafana is open-source visualization and analytics software that works with lots of different databases and data sources. It connects to data regardless of where it resides — in the cloud, on-premises, or somewhere else — and helps organizations build the perfect picture to help them understand their data. Perhaps Grafana's most distinctive feature is that it is data-source neutral: it doesn't matter where your data is stored, Grafana can unify it. These sources can include time-series, logging, SQL and document databases, cloud data sources, enterprise plugins, and more options from community-contributed plugins. No matter the source, the data stays where it is, and you can visualize and analyze it at will. This makes Grafana a versatile tool, open to use for a wide range of applications. There is one caveat: for Grafana to be useful, your data should be time-series data, i.e., data taken at particular points in time. This describes a lot of data sources, but not all of them.
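
As a rough illustration of the "data stays where it is" model, the sketch below lists the data sources a Grafana instance knows about via its HTTP API. The host URL and API key are placeholders, and the exact response fields can vary by Grafana version, so treat this as a minimal sketch rather than a definitive integration.

```python
# Minimal sketch: listing configured data sources via Grafana's HTTP API.
# GRAFANA_URL and API_KEY are placeholders -- adjust to your deployment.
import requests

GRAFANA_URL = "http://localhost:3000"   # assumed local Grafana instance
API_KEY = "YOUR_API_KEY"                # created in the Grafana UI

headers = {"Authorization": f"Bearer {API_KEY}"}

resp = requests.get(f"{GRAFANA_URL}/api/datasources", headers=headers, timeout=10)
resp.raise_for_status()

for ds in resp.json():
    # Each entry describes one backing store Grafana can pull into dashboards.
    print(f"{ds['name']:<20} type={ds['type']:<15} url={ds.get('url', '')}")
```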


Why open source is heading for a new breakthrough


While anticipating an increase in uptake, Miller doesn't expect Apple and Microsoft fans to begin jumping ship en masse – indeed, he acknowledges the platform will likely retain its more geeky audience. But that's not to say that Fedora 32 Workstation doesn't have the technical chops to go toe-to-toe with mainstream operating systems, with Miller alluding to the huge advances that desktop Linux has made over the past 15 years as Linux has moved from the server to being the default choice for embedded everything everywhere. "It's so flexible and so able to fit into all of these different use cases," he says. "To me, it's clear that Linux is technically superior." And he adds: "It's not a money-saver option – this is something you should pick if you actually want this." Of course, the technical capability of Fedora is just one small piece of the package that forms the philosophy not just of Fedora Workstation but of Linux and the open-source community in its entirety. "The real appeal of it is that this is an operating system that we own. It belongs to the people," he says. Looking to the future, Miller sees Linux as well-positioned to capitalize on the move to hybrid-type mobile devices, particularly as more OEMs throw their support behind the platform.


Will the solo open source developer survive the pandemic?

The last several weeks have been anything but. I’m not alone in finding it rough-going. For Julia Ferraioli, this isn’t because of “WFH.” It’s because of “WDP” [working during pandemic]: “I’ve been working remotely for 2.5 years. The past 2.5 months have left me more exhausted than ever before. This is your reminder that you’re not working remotely. You’re working remotely during a global health crisis.” This same pressure applies to open source maintainers, Fischer says: today independent maintainers are, like many people, under more time and financial pressure than they were only a month or two ago. Most of these creators work on their projects on the side — not as their main day jobs — and personal and professional obligations come before open source work for many. Even before the coronavirus pandemic hit, this was true. In my interviews with a diverse range of open source maintainers, from curl’s Daniel Stenberg to SolveSpace’s Whitequark, most have contributed as a side project, not as their day job.


Why a pandemic-specific BCP matters


If you have not already done so, your organisation should develop BCPs specific to a pandemic or epidemic. Most existing BCPs address business recovery and resumption after events such as extreme weather, terrorism and power outages, but do not adequately address the repercussions of a pandemic. Unlike these other risks, disease outbreaks affect people more than they do datacentres and corporate facilities, and their duration is much longer. As already seen, disease outbreaks can flare up, subside, and then flare up again. Forrester recommended a three-step process to ensure that a pandemic response plan is thorough and effective. That includes identifying an executive sponsor and building a pandemic planning team, assessing critical operations, supplier and customer relationships, as well as the impact on the workforce. According to Forrester’s data and its own direct experience, organisations still fail to exercise their plans on a regular basis.


Time is Running Out on Silverlight

This situation came about because Silverlight is not a stand-alone platform; it requires a browser to host it. And in a way, it was doomed from the start. Silverlight was first released in 2007, the very same year that Apple announced that it would not support browser plugins such as Adobe Flash on the iPhone. This essentially killed the consumer market for Silverlight, though it did live on for a while thanks to streaming services such as Netflix. Currently the only browsers that continue to run Silverlight are Internet Explorer 10 and 11. “There is no longer support for Chrome, Firefox, or any browser using the Mac operating system.” While Silverlight is essentially gone from the public web, it did gain some popularity for internal applications. For many companies this was seen as a way of quickly building line-of-business applications with better features and performance than the HTML/JavaScript applications of the time. Such applications would normally be written in WinForms or WPF, but Silverlight made deployment and updating easier.


How Technologists Can Translate Cybersecurity Risks Into Business Acumen

The technology space can easily seem abstract, and therefore confusing and overwhelming. To alleviate the fear that stems from uncertainty, technologists can distill foundational principles into checkpoints that empower business people to ask the right questions in the right environment. A good place to start is by establishing the top metrics affecting an organization by answering questions such as, “Does the organization have subject matter experts leading security?” “Who is assigned to manage this specific piece of technology?” “How do we measure this space?” “What portion of the budget is invested in protecting this technology?” “How does this technology tie into our broader risk appetite statement?” You may well find that how you measure these risks is your greatest risk. Most organizations assess risk on a quarterly basis, in addition to an annual deep-dive. In general, the more time devoted to assessing and reassessing cybersecurity threats and technology, the better. One of the foundational principles of security and risk management is that the efficacy of controls degrades over time. Technology is analogous to topography in this regard; just as you would expect natural elements like water and wind to erode a stone wall over time, technology’s architecture will likewise deteriorate – only much more quickly.


Data protection and GDPR: what are my legal obligations as a business?

The GDPR requires that anyone holding or processing personal data take both ‘technical’ and ‘organisational’ measures to ensure that personal data is secure and that data subjects’ rights are maintained. Technical measures refer to firewalls, password protection, penetration testing etc., and anyone holding personal data on electronic systems should consult with IT professionals to ensure that adequate security measures are in place to protect data. Organisational measures refer to internal policies, staff training etc. Ideally businesses will have both internal data protection policies and a program of staff training (often this is done online). ... Some countries have been deemed to have an adequate data protection framework (e.g. Switzerland, Canada) and data can be transferred to these territories (but note that any processors will still need to enter into a formal processing agreement as described above). If you are transferring to a US company then they may be certified under the “Privacy Shield” framework which allows for transfers to those specific companies.



Quote for the day:


"Time is neutral and does not change things. With courage and initiative, leaders change things." -- Jesse Jackson


Daily Tech Digest - May 17, 2020

Self-supervised learning is the future of AI

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as—with some caveats—reviewing the huge amount of content being posted on social media every day. “If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble,” LeCun says. “They are completely built around it.” But as mentioned, supervised learning is only applicable where there’s enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.


Banks Need to Learn What Big Tech Teaches

Tech advancements can be revolutionary. Let’s consider the case of the smartphone – be it iPhone or Android, a smartphone is essentially a group of services packaged together in a physical phone. Those services put an amazing amount of power and capabilities literally into the palm of your hand. Software updates occur frequently (talk about a rapid pace of change), yet users are supremely indifferent to – and often unaware of – which version of the operating system they are using… regardless, they welcome new features that are delivered as part of nonintrusive upgrades that are installed while they sleep or at whatever time they specify. Similarly, smart-equipped cars such as Tesla regularly receive over-the-air software updates that add new features and enhance functionality. No one asked for the addition of Tesla’s Sentry Mode (not even Tesla) when the car was designed. It was an afterthought (albeit a brilliant one), delivered as part of a continuous upgrade. Now drivers can monitor their Tesla wherever it’s parked and receive alerts whenever a security incident occurs.


Enabling Manufacturing using IOTA – A possible approach post Covid-19 paradigm


The Internet of Things is no longer a technological breakthrough. Industrial applications have been faster in adopting IoT, and it has been playing a significant role for businesses that require internal tracking, near-zero error rates with less manual intervention, and machine-to-machine communication along with prognostic maintenance. RFID chips and other sensors are much cheaper and easier to manufacture than most sizeable and lumbering consumer electronics. The future of IoT will continue along these lines, especially post COVID, with many manufacturing concerns embracing automation at a massive scale and gradually shaping the smart industrial applications concept. However, IoT also calls for distributed and secure exchange of the data captured through these sensors and devices. The interconnection of blockchain technology and IoT has been on the scene since 2015, to solve critical IoT challenges related to security and data privacy. The IOTA protocol, which has entered into a number of collaborations, technically differentiates itself from most cryptocurrencies through its underlying technology: it uses a Directed Acyclic Graph (DAG) as a distributed ledger that stores the transactional data of the IOTA network, instead of blockchain-enabled transactions.
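
To make the DAG idea concrete: in a Tangle-style ledger each new transaction approves a couple of earlier ones, so the ledger grows as a directed acyclic graph rather than a linear chain of blocks. The toy sketch below is purely conceptual (random tip selection, no proof-of-work or validation) and does not reflect the real IOTA implementation.

```python
# Toy sketch of a Tangle-style DAG ledger: each new transaction approves
# up to two earlier "tips" (transactions nobody has approved yet).
# Conceptual only -- no signatures, proof-of-work, or real IOTA logic.
import random

class Transaction:
    def __init__(self, tx_id, payload, approves):
        self.tx_id = tx_id
        self.payload = payload
        self.approves = approves      # ids of earlier transactions this one confirms

class Tangle:
    def __init__(self):
        self.transactions = {0: Transaction(0, "genesis", [])}
        self.tips = {0}               # transactions with no approvers yet

    def attach(self, payload):
        tx_id = len(self.transactions)
        approves = random.sample(sorted(self.tips), k=min(2, len(self.tips)))
        tx = Transaction(tx_id, payload, approves)
        self.transactions[tx_id] = tx
        self.tips -= set(approves)    # approved transactions stop being tips
        self.tips.add(tx_id)
        return tx

tangle = Tangle()
for reading in ["temp=21.5", "temp=21.7", "vibration=0.02", "temp=21.6"]:
    tx = tangle.attach(reading)
    print(f"tx {tx.tx_id} ({tx.payload}) approves {tx.approves}")
```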


A Reassessment of Enterprise Architecture Implementation

The research question in this contribution is: What are factor combinations for successful EA implementation beyond the mere notion of maturity? As a basis for our analysis we will employ a description framework which has been developed in the course of various practitioners’ workshops over the last eight years. Based on this description framework we will analyze six cases and discuss why certain companies have been rather successful in implementing EA while others did not leverage their EA investment. The analysis will show that EA success is not necessarily a matter of maturity of a number of EA functions but a complex set of factors that have to be observed when implementing EA. Also, there is no perfect set of EA factor combinations guaranteeing successful EA, because EA is always part of a complex socio-technical network. However, we will identify successful factor combinations as well as common patterns prohibiting EA success.


Supercomputers hacked across Europe to mine cryptocurrency

The malware samples were reviewed earlier today by Cado Security, a US-based cyber-security firm. The company said the attackers appear to have gained access to the supercomputer clusters via compromised SSH credentials. The credentials appear to have been stolen from university members given access to the supercomputers to run computing jobs. The hijacked SSH logins belonged to universities in Canada, China, and Poland. Chris Doman, Co-Founder of Cado Security, told ZDNet today that while there is no official evidence to confirm that all the intrusions have been carried out by the same group, evidence like similar malware file names and network indicators suggests this might be the same threat actor. According to Doman's analysis, once attackers gained access to a supercomputing node, they appear to have used an exploit for the CVE-2019-15666 vulnerability to gain root access and then deployed an application that mined the Monero (XMR) cryptocurrency. Making matters worse, many of the organizations that had supercomputers go down this week had announced in previous weeks that they were prioritizing research on the COVID-19 outbreak, which has now most likely been hampered as a result of the intrusion and subsequent downtime.


How AI & Blockchain Can Reshape Healthcare Industry?

Blockchain technology is one of the most important and disruptive technologies in the world that is being used to unlock unexplored innovations in the healthcare industry. Blockchain technology is expected to improve medical record management and the insurance claim process, accelerate clinical and biomedical research and advance the biomedical and healthcare data ledger. These expectations are based on the key aspects of blockchain technology, such as decentralized management, immutable audit trail, data provenance, robustness, and improved security and privacy. Although several possibilities have been discussed, the most notable innovation that can be achieved with blockchain technology is the recovery of data subjects’ rights. Medical data should be possessed, operated, and allowed to be utilized by data subjects rather than by hospitals. This is a key concept of patient-centered interoperability that differs from conventional institution-driven interoperability. There are many challenges arising from patient-centered interoperability, such as data standards, security, and privacy, in addition to technology-related issues, such as scalability and speed, incentives, and governance.


Five Strategies for Putting AI at the Center of Digital Transformation


Specifically, quick wins are smaller projects that involve optimizing internal employee touch points. For example, companies might think about specific pain points that employees experience in their day-to-day work, and then brainstorm ways AI technologies could make some of these tasks faster or easier. Voice-based tools for scheduling or managing internal meetings or voice interfaces for search are some examples of applications for internal use. While these projects are unlikely to transform the business, they do serve the important purpose of exposing employees, some of whom may initially be skeptics, to the benefits of AI. These projects also provide companies with a low-risk opportunity to build skills in working with large volumes of data, which will be needed when tackling larger AI projects. The second part of the portfolio approach, long-term projects, is what will be most impactful and where it is important to find areas that support the existing business strategy.


For all its sophistication, AI isn't fit to make life-or-death decisions

Face-detection surveillance is one way technology can help to track the spread of Covid-19.
Reckoning is essentially calculation: the ability to manipulate data and recognise patterns. Judgment, on the other hand, refers to a form of “deliberative thought, grounded in ethical commitment and responsible action, appropriate to the situation in which it is deployed”. Judgment, Smith observes, is not simply a way of thinking about the world, but emerges from a particular relationship to the world that humans have and machines do not. Humans are both embodied and embedded in the world. We are able to recognise the world as real and as unified but also to break it down into distinct objects and phenomena. We can represent the world but also appreciate the distinction between representation and reality. And, most importantly, humans possess an ethical commitment to the real over the representation. What is morally important is not the image or mental representation I have of you, but the fact that you exist in the world. A system with judgment must, Smith insists, not simply be able to think but also to “care about what it is thinking about”. It must “give a damn”. Humans do. Machines don’t.


The Different Kind of Value That EA & EA Framework Return to the Enterprise

Reference Architecture is a generic architecture adopted as a standard for the analysis and design of systems in the same class. To be validated as a reference, rather than merely declared as such by its promoters, a generic architecture must be widely adopted, having been reused and proved in many developments. A reference architecture, in addition to being a generic architecture, exhibits the benefits of standards. A reference architecture facilitates wide acceptance and reuse, predictable and comparable designs, reproducibility and, as such, productivity, which saves time and costs. TOGAF is no reference architecture, though, because it proposes no architecture. It is called a standard because it is specified by a standards organization with wide industry participation. TOGAF is not even a standard enterprise architecture method, though, because it is hard to comply with, or prove compliance with, due to its size and organic organisation and, most importantly, it does not deliver the enterprise architecture we are after but mostly good development practices.


Change-mapping: Plan and Action

In reality, that apparent sequence exists only because of the dependencies between each of those domains: we need to know something about Context in order to define Scopes, we need to know Scope-boundaries for any Plan, we need to be clear about the Plan and preparation before we start any Action, and we need the results of any Action, and all the setup and Scope and Context, before we can do the respective Review. There may well be quite a lot of back-and-forth between the domains as details get fleshed out and call for a rethink of what happened earlier, which would break up the sequence somewhat. And there can also be multiple instances of each domain: a context may spin off several Scopes, a Scope may require multiple projects or Plans, and each Plan may have multiple Actions, each of which will require their own Review. In that sense, no, it’s not just a straightforward single-pass linear sequence: it can often be a lot more complex than that. Yet the overall flow does line up well with that pattern – which is why it’s simplest to show it that way.



Quote for the day:


"Mistakes are always forgivable, if one has the courage to admit them." -- Bruce Lee


Daily Tech Digest - May 16, 2020

Why fuzzing is your friend for DevSecOps

Those just starting out should try open source tools. The two most popular today are AFL and libfuzzer, both primarily targeted at developers who have source code access (more on what to do without developer participation later). These tools focus on applications that are compiled, such as apps written in C and C++. Some fuzzers, predominantly commercial products, offer the ability to analyze compiled code, even without developer participation. For example, the Defense Advanced Research Projects Agency ran a Cyber Grand Challenge to see if fully autonomous cybersecurity (both offense and defense) was possible, without any developer involvement or source code. Tools derived from that competition can now analyze production environment applications from Ada, Go, Rust, Jovial and compiled binaries. One limitation today is that most tools focus on code that runs (or can be compiled for) Linux. Unfortunately, good fuzzing tools are hard to find for non-Linux based systems, such as Windows or embedded operating systems.
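
Tools like AFL and libFuzzer perform coverage-guided mutation against compiled targets; the sketch below is only a naive, coverage-unaware stand-in showing the basic fuzzing loop of mutate, run, and watch for crashes. The parse_record target and its planted bug are made up for illustration.

```python
# Naive fuzzing loop (no coverage feedback, unlike AFL/libFuzzer):
# mutate a seed input, feed it to the target, and record anything that crashes.
import random

def parse_record(data: bytes):
    """Hypothetical target with a planted bug for demonstration."""
    if len(data) > 3 and data[0] == 0xFF and data[1] == 0xFE:
        raise ValueError("malformed header")   # stands in for a real crash

def mutate(seed: bytes) -> bytes:
    buf = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seed = b"\x00\x01hello-world"
crashes = []
for _ in range(10_000):
    candidate = mutate(seed)
    try:
        parse_record(candidate)
    except Exception as exc:              # any unhandled exception counts as a finding
        crashes.append((candidate, exc))

print(f"{len(crashes)} crashing inputs found")
```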



How to use tags in Microsoft Teams

Microsoft seems to have thought of everything when it comes to its Teams collaboration app; unfortunately, that means there's a lot packed into a relatively simple interface. Some items are located in difficult to find places, and this includes the tagging function team owners can use to create small groups of people inside of teams for easier communication. Tags can be created for particular projects, sub teams inside particular departments, or any other group that needs to communicate easily through a simple "at" mention in the Microsoft Teams chat window. There are a few tricks to knowing how to use tags in Microsoft Teams--once you have it down, though, it's easy. To start, you'll need to figure out if you have the ability to create tags in Microsoft Teams. For individuals or small business Microsoft Teams leaders, this is something you can set inside the Teams app. If you're using Microsoft Teams in an enterprise, you'll need to contact a Teams admin to make this change in the Teams Admin Center, which is a cloud-based administrator console.


Fight microservices complexity with low-code development

Microservices independently communicate with one another over internet standards, which is what makes the architecture powerful. Because they speak TCP/IP and deliver data payloads in JSON, the components work together without dependencies. These small services each perform one task well. A company can have a set of services for customer information, another for product lookup, a third for orders and a fourth for delivery. But breaking things down along business functions means there's a lot of code to manage. When something goes wrong, application teams require specialized observability tools that trace the entire chain of events to debug. Microservices require logging and monitoring work that exists outside the idea of simple components. That creates an explosion of code just to make the app code work. When something goes wrong, figuring out which component contributed to the issue can be tricky without the right tools -- which, again, means more code. While each service has high uptime in this supported deployment, resilience and reliability at the code level start to crumble.
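
The point about components cooperating over plain HTTP and JSON can be shown with a couple of hypothetical endpoints; the service URLs below are placeholders, and the tracing, retries and logging the article describes as the real cost are deliberately left out of this sketch.

```python
# Hypothetical order lookup that stitches together three independent services
# over HTTP/JSON. Service URLs are placeholders; error handling and tracing
# (the "explosion of code" the article mentions) are deliberately omitted.
import requests

CUSTOMER_SVC = "http://customers.internal/api"   # assumed endpoints
PRODUCT_SVC  = "http://products.internal/api"
ORDER_SVC    = "http://orders.internal/api"

def describe_order(order_id: str) -> dict:
    order    = requests.get(f"{ORDER_SVC}/orders/{order_id}", timeout=5).json()
    customer = requests.get(f"{CUSTOMER_SVC}/customers/{order['customer_id']}", timeout=5).json()
    product  = requests.get(f"{PRODUCT_SVC}/products/{order['product_id']}", timeout=5).json()
    return {
        "order": order_id,
        "customer": customer["name"],
        "product": product["name"],
        "status": order["status"],
    }
```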


How Google and Microsoft are cleaning up crowded browsers

In any case, Google is again turning its attention to tabs. In Chrome OS 81, it has added graphical site previews to touch-friendly tabs that appear with a swipe down from the top. The experience evokes the way Internet Explorer handled them back in the Windows RT days. Like other Chrome OS touch accommodations, it functions only when a Chromebook is in "tablet mode," i.e., when no keyboard is attached. Following this come reports that the company will formalize the grouping of tabs for better organization in Chrome, which has been available on an experimental basis. Both moves come on the heels of Microsoft demonstrating vertical tabs coming to Edge, announced as part of Microsoft 365. These may not be as useful for organization as Chrome's tab grouping (the utility of which can also be addressed with multiple windows and even multiple desktops) and won't do much for touch friendliness, but it's easy to see how a grouping function could be added in the future. Even at launch, vertical tabs will do a better job at distinguishing among tab titles as the number of open tabs in a window grows.


U.S. Secret Service: “Massive Fraud” Against State Unemployment Insurance Programs


A federal fraud investigator who spoke with KrebsOnSecurity on condition of anonymity said many states simply don’t have enough controls in place to detect patterns that might help better screen out fraudulent unemployment applications, such as looking for multiple applications involving the same Internet addresses and/or bank accounts. The investigator said in some states fraudsters need only to submit someone’s name, Social Security number and other basic information for their claims to be processed. The alert follows news reports by media outlets in Washington and Rhode Island about millions of dollars in fraudulent unemployment claims in those states. On Thursday, The Seattle Times reported that the activity had halted unemployment payments for two days after officials found more than $1.6 million in phony claims. “Between March and April, the number of fraudulent claims for unemployment benefits jumped 27-fold to 700,” the state Employment Security Department (ESD) told The Seattle Times.


Which Agile contract type fits your project and budget?

Rather than see a software project to fruition as one large batch of work spanning several months, Agile breaks the work into manageable, adaptable and valuable segments. Some organizations can't handle restructuring for Agile, or they lack the resources to develop all their software projects in house. Outsourcing seems like the way to adopt Agile and reap its benefits. "We're starting to see projects that are handed over to a vendor -- a whole development effort, and they want the vendor to do it on an Agile basis," said Chris Powers, vice president of services at ClearEdge Partners, a consulting firm based in Boston. Powers hosted a webinar called Agile Contracting Best Practices, covering challenges in choosing a third-party development partner, and common types of contracts. Just as organizations cannot simply flip a switch to become Agile, they can't expect to outsource Agile work without giving up their Waterfall methodology. Agile work can fall under fixed-fee and time and materials (T&M) agreements that hardly differ from Waterfall approaches.


The Future of Data Architecture


Along with the emergence of dashboards and information reporting, he said, there was a strong desire to have access to analytics on the phone, because executives needed to be able to see their numbers anytime, anywhere. Now responsive design makes it possible for the output format to be decoupled from the analytics programming calculation, and the receiver can choose their form factor independently of the creation of the analytics itself. “Phones and mobile analytics used to be super-hot. Now they’ve settled down, and now they’re just part of the fabric of everything that we’re doing.” “It was the peak of hilarity to me that when we first started talking about the Internet of Things, we were saying, ‘Okay, the Twitter-enabled refrigerator.’ You remember that?” Not surprisingly, refrigerators with a screen enabling tweets from the kitchen have not become commonplace. “Who thought that was really going to help?” Algmin said that we’ve reached a point where many organizations have a Chief Data Officer or CDO equivalent, because they recognize that they want more from their data.


Language and Platform for Cloud-Era Application Developers

For decades, programming languages have treated networks simply as I/O sources. Because of that, to expose simple APIs, developers have to implement these services by writing an explicit loop that waits for network requests until a signal is obtained. Ballerina treats the network differently by making networking abstractions like client objects, services, resource functions, and listeners a part of the language’s type system, so you can use the language-provided types to write network programs that just work. Using the service type and a listener object in Ballerina, developers can expose their APIs by simply writing API-led business logic within the resource function. Depending on the protocol defined in the listener object, these services can be exposed over HTTP/HTTPS, HTTP/2, gRPC, and WebSockets. Ballerina services come with built-in concurrency. Every request to a resource method is handled in a separate strand (Ballerina's concurrent unit), which gives implicit concurrent behavior to a service.


5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy


As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process. They use a public cloud to set up and do application development, because it’s very simple and easy to use, so you can get started quickly. But once applications are ready to deploy in production, enterprises may move them back to the on-premises data center for data governance or cost reasons. The hybrid cloud model makes it possible for an organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production. If your DevOps team is using cloud resources to build an application for speed, simplicity and low cost, you can use PubSub+ Event Broker: Software brokers or PubSub+ Event Broker: Cloud, our SaaS, in any public or private cloud environment. And if you’re moving an application to an on-premises datacenter when going into production for security purposes, you can simply move the application without having to rewrite the event routing. It’s just like the lift-and-shift use case described above, but in reverse.


How to manipulate hierarchical information in flat relational database tables

A document management system would help to create, keep and disseminate knowledge to other people so they could learn how to deliver and execute Linux-based projects. However, since I had no budget, I could not purchase any document management software. So with free A.S.P., Notepad, IIS Express, SQL Server Express and Gimp, I created a document management website to hold documents. The first system I created was simple. The parent folders or categories and documents are shown on the home page. Clicking on a folder or category name or document opened it up in the next page. This was horrible and slow. So I racked my brains for a couple of months on how to do it better. Finally, I came up with this algorithm, which was 1.10.8 based. I wrote the horrible, ultra-complicated A.S.P. code in Notepad (no budget for a Visual Studio license) and built the functional document management website. All the other C.O.E.'s started using my website too, as they liked it and all needed a document management system which they had no budget to purchase.
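
The "1.10.8" numbering hints at a materialized-path approach: store each folder's full dotted path in one flat table so that a whole subtree can be fetched with a single prefix query, no recursion required. The sketch below uses SQLite purely as a stand-in for the author's SQL Server Express setup, with made-up folder names.

```python
# Materialized-path sketch: hierarchical folders kept in one flat table,
# with the dotted path ("1.10.8") making subtree queries a simple prefix match.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE folders (id INTEGER PRIMARY KEY, path TEXT, name TEXT)")

rows = [
    (1, "1",      "Linux Projects"),
    (2, "1.10",   "Deployment Guides"),
    (3, "1.10.8", "Kickstart Templates"),
    (4, "1.2",    "Runbooks"),
]
conn.executemany("INSERT INTO folders VALUES (?, ?, ?)", rows)

# Everything under "1.10" (the folder plus its descendants) in one query.
subtree = conn.execute(
    "SELECT path, name FROM folders WHERE path = ? OR path LIKE ? ORDER BY path",
    ("1.10", "1.10.%"),
).fetchall()

for path, name in subtree:
    depth = path.count(".")
    print("  " * depth + f"{path}  {name}")
```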



Quote for the day:


"We are what we repeatedly do. Excellence therefore is not an act, but a habit." -- Aristotle


Daily Tech Digest - May 15, 2020

The Past, Present, and Future of API Gateways


AJAX (Asynchronous JavaScript and XML) development techniques became ubiquitous during this time. By decoupling data interchange from presentation, AJAX created much richer user experiences for end users. This architecture also created much “chattier” clients, as these clients would constantly send and receive data from the web application. In addition, ecommerce during this era was starting to take off, and secure transmission of credit card information became a major concern for the first time. Netscape introduced Secure Sockets Layer (SSL) -- which later evolved to Transport Layer Security (TLS) -- to ensure secure connections between the client and server. These shifts in networking -- encrypted communications and many requests over longer lived connections -- drove an evolution of the edge from the standard hardware/software load balancer to more specialized application delivery controllers (ADCs). ADCs included a variety of functionality for so-called application acceleration, including SSL offload, caching, and compression. This increase in functionality meant an increase in configuration complexity.


Adapting Cloud Security and Data Management Under Quarantine

The current state of affairs is not something envisioned by many business continuity plans, says Wendy Pfeiffer, CIO of Nutanix. Most organizations are operating in a hybrid mode, she says, with infrastructure and services running in multiple clouds. This can include private clouds, SaaS apps, Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Though this specific situation may not have been planned for, the cloud allows for unexpected needs to scale and pivot, Pfeiffer says. “Maybe we envisioned a region being inaccessible but not necessarily every region all at once.” Normally it can be easy to declare standards within IT, she says, and instrument an environment to operate in line with those standards to maintain control and security. Losing control of that environment under quarantines can be problematic. “If everyone suddenly pivots to work from home, then we no longer control the devices people use to access the network,” Pfeiffer says. Such disruption, she says, makes it difficult to control performance, security, and the user experience.


While 78 per cent of organisations said they are using more than 50 discrete cybersecurity products to address security issues, 37 per cent used more than 100 cybersecurity products. Organisations who discovered misconfigured cloud services experienced 10 or more data loss incidents in the last year, according to the report. IT professionals have concerns about cloud service providers. Nearly 80 per cent are concerned that cloud service providers they do business with will become competitors in their core markets. "Seventy-five per cent of IT professionals view public cloud as more secure than their own data centres, yet 92 per cent of IT professionals do not trust their organization is well prepared to secure public cloud services," the findings showed. Nearly 80 per cent of IT professionals said that recent data breaches experienced by other businesses have increased their organization's focus on securing data moving forward.


Continuous Security Through Developer Empowerment

Before DevOps kicked in, app performance monitoring (APM) was owned by IT, who used synthetic measurements from many points around the world to assess and monitor how performant an application was. These solutions were powerful, but their developer experience was horrible. They were expensive, which limited tests developers could run. They excelled in explaining the state through aggregating tests, but offered little value to a developer trying to troubleshoot a performance problem. As a result, developers rarely used them. Then, New Relic came on the scene, introducing a different approach to APM. Their tools were free or cheap to start with, making it accessible to all dev teams. They used instrumentation to offer rich results in developer terms (call stacks, lines of code), making them better for fixing problems. This new approach revolutionized the APM industry, embedded performance monitoring into dev practices and made the web faster. The same needs to happen for application security.


Data security guide: Everything you need to know

The move to the cloud presents an additional threat vector that must be well understood in respect to data security. The 2019 SANS State of Cloud Security survey found that 19% of survey respondents reported an increase in unauthorized access by outsiders into cloud environments or cloud assets, up 7% since 2017. Ransomware and phishing also are on the rise and considered major threats. Companies must secure data so that it cannot leak out via malware or social engineering. Breaches can be costly events that result in multimillion-dollar class action lawsuits and victim settlement funds. If companies need a reason to invest in data security, they need only consider the value placed on personal data by the courts. Sherri Davidoff, author of Data Breaches: Crisis and Opportunity, listed five factors that increase the risk of a data breach: access; amount of time data is retained; the number of existing copies of the data; how easy it is to transfer the data from one location to another -- and to process it; and the perceived value of the data by criminals.


This new, unusual Trojan promises victims COVID-19 tax relief


The malware is unusual as it is written in Node.js, a language primarily reserved for web server development. "However, the use of an uncommon platform may have helped evade detection by antivirus software," the team notes. The Java downloader, obfuscated via Allatori in the lure document, grabs the Node.js malware file -- either "qnodejs-win32-ia32.js" or "qnodejs-win32-x64.js" -- alongside a file called "wizard.js." Either a 32-bit or 64-bit version of Node.js is downloaded depending on the Windows system architecture on the target machine. Wizard.js' job is to facilitate communication between QNodeService and its command-and-control (C2) server, as well as to maintain persistence through the creation of Run registry keys. After executing on an impacted system, QNodeService is able to download, upload, and execute files; harvest credentials from the Google Chrome and Mozilla Firefox browsers, and perform file management. In addition, the Trojan can steal system information including IP address and location, download additional malware payloads, and transfer stolen data to the C2.


Quantum computing analytics: Put this on your IT roadmap


"There are three major areas where we see immediate corporate engagement with quantum computing," said Christopher Savoie, CEO and co-founder of Zapata Quantum Computing Software Company, a quantum computing solutions provider backed by Honeywell. "These areas are machine learning, optimization problems, and molecular simulation." Savoie said quantum computing can bring better results in machine learning than conventional computing because of its speed. This rapid processing of data enables a machine learning application to consume large amounts of multi-dimensional data that can generate more sophisticated models of a particular problem or phenomenon under study. Quantum computing is also well suited for solving problems in optimization. "The mathematics of optimization in supply and distribution chains is highly complex," Savoie said. "You can optimize five nodes of a supply chain with conventional computing, but what about 15 nodes with over 85 million different routes? Add to this the optimization of work processes and people, and you have a very complex problem that can be overwhelming for a conventional computing approach."


COBIT Tool Kit Enhancements

The value of this tool is that it provides a convenient means of quickly assessing and assigning relevant roles to practices across the 40 COBIT objectives. COBIT promotes using a common language and common understanding among practitioners. Common terminology facilitates communication and mitigates opportunities for error. Using RACI charts and the new COBIT Tool Kit spreadsheet provides the guidance to help practitioners extract the COBIT practices relevant for each job role. Another benefit of compiling all practices into a single RACI chart is that metrics reporting can be better assessed. A user can filter all practices by accountability of a single role and then compare metrics reporting on those practices and determine whether sufficient coverage has been created. An assessment of that type is not as effective when RACIs are developed at the higher, objective, level. The new spreadsheet can be found in the complementary COBIT 2019 Tool Kit. The tool kit is available on the COBIT page of the ISACA website.


Build your own Q# simulator – Part 1: A simple reversible simulator


Simulators are a particularly versatile feature of the QDK. They allow you to perform various different tasks on a Q# program without changing it. Such tasks include full state simulation, resource estimation, or trace simulation. The new IQuantumProcessor interface makes it very easy to write your own simulators and integrate them into your Q# projects. This blog post is the first in a series that covers this interface. We start by implementing a reversible simulator as a first example, which we extend in future blog posts. A reversible simulator can simulate quantum programs that consist only of classical operations: X, CNOT, CCNOT (Toffoli gate), or arbitrarily controlled X operations. Since a reversible simulator can represent the quantum state by assigning one Boolean value to each qubit, it can run even quantum programs that consist of thousands of qubits. This simulator is very useful for testing quantum operations that evaluate Boolean functions.
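
The blog series implements this against the QDK's IQuantumProcessor interface in C#; purely as a language-neutral illustration of why such a simulator is cheap, the sketch below keeps one Boolean per qubit and applies X, CNOT and CCNOT classically. It is not the QDK implementation.

```python
# Conceptual reversible simulator: one Boolean per qubit is enough to run
# programs made only of X, CNOT and CCNOT (Toffoli) gates. A sketch only,
# not the QDK's IQuantumProcessor-based implementation.
class ReversibleSimulator:
    def __init__(self, num_qubits: int):
        self.state = [False] * num_qubits      # one classical bit per qubit

    def x(self, q: int):
        self.state[q] = not self.state[q]

    def cnot(self, control: int, target: int):
        if self.state[control]:
            self.state[target] = not self.state[target]

    def ccnot(self, c1: int, c2: int, target: int):
        if self.state[c1] and self.state[c2]:
            self.state[target] = not self.state[target]

# Evaluate a Boolean function: qubit 2 ends up holding (q0 AND q1).
sim = ReversibleSimulator(3)
sim.x(0)           # q0 = 1
sim.x(1)           # q1 = 1
sim.ccnot(0, 1, 2)
print(sim.state)   # [True, True, True]
```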


Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library

This article describes Diligent Engine, a light-weight cross-platform graphics API abstraction layer that is designed to solve these problems. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common C/C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as the graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. ... As was mentioned earlier, Diligent Engine follows the next-gen APIs to configure the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth stencil, rasterizer and blend state descriptions, etc.). This approach maps directly to Direct3D12/Vulkan, but is also beneficial for older APIs as it eliminates pipeline misconfiguration errors.



Quote for the day:


"Different times need different types of leadership." -- Park Geun-hye


Daily Tech Digest - May 14, 2020

10 things you thought you knew about blockchain that are probably wrong

Blockchain and DLT mean the same thing: Not so much. A blockchain is just one type of DLT. There are many such technologies, and not all of them are blockchains. Just like using the term Xerox to describe all photocopies, "blockchain" is being used to refer to all types of DLTs regardless of underlying technology or architecture but, at this point in the technology's evolution, it's a distinction without a difference, Bennett said. This is why the report itself references all DLTs as blockchains. ... Blockchains will eliminate the need for intermediaries in transactions: While they may change the role of these individuals and organizations, DLTs will not eliminate the role they play in facilitating, verifying, or closing transactions. "The only way to cut out third parties is for a consumer or business to interact with a blockchain directly," the report said. "But even in scenarios where ecosystem partners deal directly with each other at the expense of existing third parties, it doesn't mean third parties will no longer be part of the mix. And let's not forget that the world of cryptocurrencies is full of trusted third parties in the shape of wallet providers and cryptocurrency exchanges."



A Hybrid Approach to Database DevOps


Redgate’s state-based deployment approach uses a schema comparison engine to generate a ‘model’ of the source database from the DDL (state) scripts, and then compares this to the metadata of a target database. It auto-generates a single deployment script that will make the target the same as the source, regardless of the version of the source and target. If the target database is empty, then the auto-generated script will contain the SQL to create all the required objects, in the correct dependency order, in effect migrating a database at version “zero” to the version described in the source. This approach works perfectly well for any development builds where preserving existing data is not required. If the current build becomes a candidate for release, and we continue with the same approach, then the tool would generate a deployment script that will modify the schema of any target database so that it matches the version represented by the release candidate. However, if the development involves making substantial schema alterations, such as renaming tables or columns, or splitting tables and remodelling relationships, then it will be impossible for the automated script to understand how to make those changes while preserving existing data.
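
Conceptually, a state-based deployment diffs the desired object set against what the target already has and emits only the missing or obsolete statements. The sketch below is a drastically simplified illustration of that idea with made-up table definitions; it is not Redgate's comparison engine and ignores columns, constraints, dependencies and data preservation entirely.

```python
# Drastically simplified state-based comparison: diff the desired table set
# against the target's current tables and emit CREATE/DROP statements.
source_tables = {   # the "state" scripts: what the database should look like
    "Customer": "CREATE TABLE Customer (Id INT PRIMARY KEY, Name NVARCHAR(100))",
    "OrderLine": "CREATE TABLE OrderLine (Id INT PRIMARY KEY, CustomerId INT)",
}
target_tables = {   # metadata read from the database being deployed to
    "Customer": "CREATE TABLE Customer (Id INT PRIMARY KEY, Name NVARCHAR(100))",
    "LegacyAudit": "CREATE TABLE LegacyAudit (Id INT PRIMARY KEY)",
}

deployment_script = []
for name in sorted(source_tables.keys() - target_tables.keys()):
    deployment_script.append(source_tables[name] + ";")        # missing on target
for name in sorted(target_tables.keys() - source_tables.keys()):
    deployment_script.append(f"DROP TABLE {name};")             # no longer in source

print("\n".join(deployment_script))
```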


Health Data Breach Update: What Are the Causes?

Security and privacy teams need to be ready to deal with staff departures, security experts say. "We cannot presume to know the reason for the doctor moving to a different organization, but what is often not mentioned in any type of privacy or security training is 'whose information is it, anyway?'" says Susan Lucci, senior privacy and security consultant at tw-Security. "Some providers may assume that once they treat patients, they have rights to all their information. It appears that in this case, the physician downloaded only information that would be beneficial to alert the patient of the physician's new practice, not that it was downloaded for continuity of care. The personally identifiable information belongs to the facility, and they have a duty to protect it. Release of any confidential information must take place through appropriate channels and authorization." As healthcare entities and their vendors continue to deal with the COVID-19 crisis, new circumstances for breaches could emerge, some experts note.


Why Data Quality Is Critical For Digital Transformation

Often in the case of mergers, companies struggle the most with the consequences of poor data. When one company’s Customer Relationship Management (CRM) system is messed up, it affects the entire migration process – where time and effort is supposed to be spent in understanding and implementing the new system, it’s spent in sorting data!  What exactly constitutes poor data? Well, if your data suffers from: Human input error such as spelling mistakes, typos, upper- and lower-case issues, lack of consistency in naming conventions across the data set; Inconsistent data format across the data set such as phone numbers with and without a country code or numbers with punctuation; Address data that is invalid or incomplete with missing street names or postcodes; and Fake names, addresses or phone numbers …then it’s considered to be flawed data.  These are considered surface issues that are inevitable and universal – as long as you have humans formulating and inputting the data errors will occur. 
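
A first pass at the kinds of issues listed above (case inconsistency, punctuation in phone numbers, missing postcodes) can be automated. The sketch below is a minimal, assumption-laden normalisation routine with made-up field names, not a full data-quality pipeline.

```python
# Minimal normalisation pass over a CRM-style record: trims and title-cases
# names, strips punctuation from phone numbers, and flags missing postcodes.
# Field names and rules are illustrative assumptions only.
import re

def clean_record(record: dict) -> dict:
    cleaned = dict(record)
    cleaned["name"] = record.get("name", "").strip().title()

    digits = re.sub(r"\D", "", record.get("phone", ""))
    cleaned["phone"] = digits or None          # empty string becomes a flagged None

    postcode = record.get("postcode", "").strip().upper()
    cleaned["postcode"] = postcode or None

    cleaned["issues"] = [f for f in ("phone", "postcode") if cleaned[f] is None]
    return cleaned

print(clean_record({"name": "  jOHN smith ", "phone": "+44 (0)20-7946 0000", "postcode": ""}))
```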


Cisco, others, shine a light on VPN split-tunneling

Basically split tunneling is a feature that lets customers select specific, enterprise-bound traffic to be sent through a corporate VPN tunnel. The rest goes directly to the Internet Service Provider (ISP) without going through the tunnel. Otherwise all traffic, even traffic headed for sites on the internet, would go through the VPN, through enterprise security measures and then back out to the internet. The idea is that the VPN infrastructure has to handle less traffic, so it performs better. Figuring out what traffic can be taken out of the VPN stream can be a challenge that Cisco is trying to address with a relatively recent product. It combines telemetry data gathered by Cisco AnyConnect VPN clients with real-time report generation and dashboard technology from Splunk. Taken together, the product is known as Cisco Endpoint Security Analytics (CESA) and is part of the AnyConnect Network Visibility Module (NVM). Cisco says that until July 1, 2020, CESA trial licenses are offered free for 90 days to help IT organizations with surges in remote working.


How to control access to IoT data

Companies also shouldn’t forget to consider security measures that they have in place for other areas of the business, and think twice before relying on settings already applied to devices without checking. “IT teams cannot forget to apply basic IT security policies when it comes to controlling access to IoT generated data,” Simpson-Pirie continued. “The triple A process of access, authentication and authorisation should be applied to every IoT device. It’s imperative that each solution maintains a stringent security framework around it so there is no weak link in the chain. “Security has long been a second thought with IoT, but the stakes are too high in the GDPR era to simply rely on default passwords and settings.” Security is, by no means, the only important aspect to consider when controlling access to IoT data; there are also the matters of visibility, and having a backup plan for when security becomes weakened. For Rob McNutt, CTO at Forescout, the latter can come to fruition by segmenting the network. “Organisations need to have full visibility and control over all devices on their networks, and they need to segment their network appropriately,” he said.


Nvidia & Databricks announce GPU acceleration for Spark 3.0


The GPU acceleration functionality is based on the open source RAPIDS suite of software libraries, themselves built on CUDA-X AI. The acceleration technology, named (logically enough) the RAPIDS Accelerator for Apache Spark, was collaboratively developed by Nvidia and Databricks (the company founded by Spark's creators). It will allow developers to take their Spark code and, without modification, run it on GPUs instead of CPUs. This makes for far faster machine learning model training times, especially if the hardware is based on the new Ampere-generation GPUs, which by themselves offer 5-fold+ faster training and inferencing/scoring times than their Nvidia Volta predecessors. Faster training times allow for greater volumes of training data, which is needed for greater accuracy. But Nvidia says the RAPIDS accelerator also dramatically improves the performance of Spark SQL and DataFrame operations, making the GPU acceleration benefit non-AI workloads as well. This means the same Spark cluster hardware can be used for both data engineering/ETL workloads as well as machine learning jobs.
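
As a hedged sketch of what "without modification" means in practice: the DataFrame code stays ordinary PySpark, and GPU execution is switched on through configuration. The property names below (spark.plugins=com.nvidia.spark.SQLPlugin, spark.rapids.sql.enabled) reflect the RAPIDS Accelerator documentation as understood here and should be checked, along with the required plugin jars, against the version you deploy.

```python
# Sketch of enabling the RAPIDS Accelerator on a Spark 3.0 session. The
# DataFrame code is ordinary PySpark; GPU execution is a configuration concern.
# Verify property names and classpath jars for your environment.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rapids-etl-sketch")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")   # RAPIDS SQL plugin
    .config("spark.rapids.sql.enabled", "true")              # allow GPU operators
    .getOrCreate()
)

# Unchanged Spark SQL/DataFrame code; eligible operators run on the GPU.
df = spark.read.parquet("s3://example-bucket/transactions/")   # placeholder path
summary = df.groupBy("customer_id").sum("amount")
summary.show()
```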


Nation state APT groups prefer old, unpatched vulnerabilities


“The recent diffusion of smart working increased enormously the adoption of SaaS solutions for office productivity, customer service, financial administration, and other processes. This urgency also increased as well the exposure of misconfigured or too permissive rights. All this has been leveraged by attackers to their advantage,” he said. “A solid vulnerability management, detection, and response workflow that included the ability to validate cloud security posture and compliance with CIS benchmarks – while shortening the Time To Remediate (TTR) would have been a great help for security teams,” said Rottigni. “The mentioned vulnerabilities made their ways in these sad hit parades as the most exploited ones: a clear indicator of the huge room for improvement that organisations still have.” “They can achieve this with properly orchestrated security programs, leveraging SaaS solutions that have the fastest adoption path, the shortest learning curve and the highest success rate in risk mitigation due to their pervasiveness across the newest and widest digital landscapes.”


Evolving IT into a Remote Workforce

When remote work initiatives first began rolling out 20 years ago, I recall a telecom sales manager telling me that six months after he'd deployed his sales force to the field where they all worked out of home offices, he discovered a new problem: He was losing cohesion in his salesforce. “Employees wanted to come in for monthly meetings,” he said. “It was important from a team morale standpoint for them to interact with each other, and for all of us to remind each other what the overall corporate goals and sales targets were.” The solution at that time was to create monthly on-prem staff meetings where everyone got together. A similar phenomenon could affect IT workforces that take up residence in home offices to perform remote work. There could be breakdowns in IT project cohesion without the benefit of on-prem “water cooler” conversations and meetings that foster lively information exchanges. In other cases, there could be some employees who don't perform as well in a home office as they would in the company office. IT managers are likely to find that their decisions on what IT can be done remotely will be based on not only what they could outsource, but also whom.


AI: A Remedy for Human Error?


Humans are naturally prone to making mistakes. Such errors are increasingly impactful in the workplace, but human error in the realm of cybersecurity can have particularly devastating and long-lasting effects. As the digital world becomes more complex, it becomes much tougher to navigate – and thus, more unfair to blame humans for the errors they make. Employees should be given as much help and support as possible. But employees are not often provided with the appropriate security solutions, so they resort to well-intentioned workarounds in order to keep pace and get the job done. As data continues to flow faster and more freely than ever before, it becomes more tempting to just upload that document from your personal laptop, or click on that link, or share that info to your personal email. Take, for instance, one of the most common security problems: phishing emails. An employee might follow instructions in a phishing email not only because it looks authentic, but because it conveys some urgency. Employee training can help reduce the likelihood of error, but solving the technological shortcoming is more effective: if a phishing email is blocked from delivery in the first place, we can help mitigate the human error factor.



Quote for the day:


"Leadership is intangible, and therefore no weapon ever designed can replace it." -- Omar N. Bradley