Daily Tech Digest - September 22, 2019

The augmented city: how technologists are transforming the Earth into theater

The Augmented City, by Scape Technologies
Want incredible immersive experiences in your city? The remaining technological hurdles include persisting digital content to a location, accessing a real-time 3D semantic world map, occluding digital content against the physical world, and multi-player support. Centimeter-level positioning is required, yet Global Navigation Satellite Systems (GNSS) such as BeiDou, Galileo, and GPS cannot achieve this without the software and hardware to tap into geodetic infrastructure. Advancing consumer camera capabilities, dual raw GNSS data, 5G networks, and computer vision offer potential solutions, including triangulating position from landscape images snapped on a smartphone. Buckingham Palace in London is one of the locations activated in 2019 via Snap’s Lens Studio Landmarkers, enabling real-time immersive AR experiences. Lens Studio creators have built over 400,000 AR lenses, which have accumulated 15 billion plays. Snap currently reaches 90% of 13- to 24-year-olds in the US, a higher share than Facebook or Instagram.


Origins of Enterprise Architecture Frameworks

Over the last thirty years, one EA framework has risen above the rest to become the most popular: The Open Group Architecture Framework, or TOGAF. The Open Group was formed in 1996 through the merger of the Open Software Foundation and X/Open Company, with a mission to form a consortium that enables the achievement of business objectives through the development of open, vendor-neutral technology standards. The Open Group grew to over 650 active members who create standards for the field of computer engineering. Through this effort, the Open Group created ArchiMate, a modeling language that breaks systems down into active structures, passive structures, and behaviors. TOGAF is currently in its ninth version, but the most widely recognizable feature of The Open Group’s TOGAF is the ADM, or Architecture Development Method. This method takes a cyclical approach to the development of an architecture. The cycle consists of developing a vision; defining the business, application, data, and technology domains; planning; managing change; deploying; and governing the architecture, all while maintaining the requirements as a central focal point.
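The cyclical shape of the ADM can be sketched as a simple ordered structure. This is only an illustration: the phase names follow TOGAF's own documentation, but the helper function is hypothetical.

```python
# Phases of the TOGAF ADM cycle; Requirements Management sits at the
# centre of the cycle and applies throughout every phase.
ADM_PHASES = [
    "Architecture Vision",
    "Business Architecture",
    "Information Systems Architectures",  # covers data and application domains
    "Technology Architecture",
    "Opportunities and Solutions",
    "Migration Planning",
    "Implementation Governance",
    "Architecture Change Management",
]

def next_phase(current: str) -> str:
    """Return the phase that follows `current`, wrapping around the cycle."""
    i = ADM_PHASES.index(current)
    return ADM_PHASES[(i + 1) % len(ADM_PHASES)]
```

The wrap-around in `next_phase` is the point: after Architecture Change Management the cycle begins again with a new Architecture Vision.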


Make Artificial Intelligence Work for Your Business Needs

Enterprises beginning their AI journeys often rely on the services of the software provider or an AI development company for necessary customization. Some organizations, however, attempt to tackle the work in house, often with mixed results. "Having internal AI capability - a combination of talent, platforms, tools, knowledge, relationships, and data - offers the option of doing it internally versus outsourcing," said Monika Wilczak, an advisory managing director in artificial intelligence at business services advisory EY. "The stronger the internal AI capability, and the more mature the enterprise is around the application of AI as a strategy for growth, the more likely it is to use its own data scientists and application engineers for customization," she explained. Still, even enterprises with full-fledged AI development teams can find customization an expensive and time-consuming undertaking. "Customization of vendors’ AI products requires data class inclusiveness, controls to avoid data bias, and the availability of a sufficient volume of labeled data."



How To Drive Innovation During A Recession

Fast-Fail Innovation is technically easy for us to do, but we have no idea whether anyone will buy these ideas from our company. This is where entrepreneurs play. Here you must go to market quickly to test and learn. You expect to fail fast and often before succeeding with an offering that may literally be refined by your customers’ in-market feedback. Unfortunately, although this type of innovation can be done quickly and inexpensively, your team must be ready to experience many, many failures before they find a winning new idea. Under the pressure of a recession, teams are afraid to fail for fear of losing their jobs, so they will actively avoid engaging in the very activity that makes this quadrant successful. ... Differentiation Innovation is technically difficult for us to do, but we know our customers really want it. We know this because we can measure which problems to fix first, second, and third. We can measure the size of each opportunity. We can measure the price customers will pay us if we address a specific need or problem.


Microsoft: Cyberattacks now the top risk, say businesses


This year, the second most widely cited top-five risk is economic uncertainty, followed by brand damage, regulation, and loss of key personnel. The World Economic Forum (WEF) 2019 Global Risks Report ranks data theft and cyberattacks among the top five risks in terms of likelihood, though behind extreme weather events and climate change concerns. Of course, since 2017 the world has seen the damage caused by the WannaCry ransomware outbreak, which the US government blamed on North Korea. It was followed shortly by the hugely costly NotPetya malware, which Western governments blamed on Kremlin hackers. Criminal ransomware attacks continue to strike targets too, such as the attack on Norsk Hydro earlier this year that cost the company $40m. And over the past few months, multiple US local governments have weathered targeted ransomware attacks, with at least one attacker demanding a ransom payment of $5.3m. Lately, universities across the West have come under fire from state-sponsored hacking groups in search of intellectual property. These days, however, business email compromise (BEC) is shaping up to be the most costly and common threat.


Facial recognition technology threatens to end all individual privacy

The consequences can be even more malign. Experts, including the London police ethics panel, argue that facial recognition could exhibit racial and gender bias. That is certainly what the American experience with this technology implies. The technology relies on sifting through the biometric data of thousands of people on criminal databases, but those datasets do not contain enough information on racial minorities or women to be accurate. Many of these groups already have a deep mistrust of the police, and being wrongly targeted by a racially biased algorithm will not help. And it is not just the state that is involved. An investigation by Big Brother Watch found that privately owned sites – including shopping centres, property developers, museums and casinos – have been using facial recognition, too. A trial in Manchester’s Trafford Centre scanned more than 15 million faces before ultimately being stopped in its tracks by the Surveillance Camera Commissioner. ... Sadly, the high court in Wales did not grasp the conflict with civil liberties, recently ruling that a facial recognition trial by South Wales Police was legal.


How a hacked Jeep Cherokee led to increased security from cyber carjackers


Harman saw its Jeep hack experience as a viable business opportunity: the supplier today sells cybersecurity software that allows automakers to monitor their fleets and provide over-the-air software updates. Analysts at IHS Markit consider Harman one of the top players in that segment, with some 20 automakers using its over-the-air services. Harman does not break out revenue for that business, but it does try to recover some costs by charging higher prices for advanced security. "We have to educate our salespeople in conversations with carmakers' purchasing departments and say 'don't let this go without adding cybersecurity to your quote'," said Amy Chu, Harman's senior director of automotive product security. Asaf Atzmon, the Israel-based vice president and general manager for automotive cybersecurity, said Harman has come a long way since he joined in March 2016 as part of the TowerSec deal. At the time, Harman employed only a few security architects; the company later changed its organizational structure, appointing or hiring professionals such as Wood and Chu to oversee cybersecurity efforts, Atzmon said.


Shared resources enable greater collaboration: big science in the cloud

The experience in developing DataLabs has provided a springboard for rolling out similarly collaborative platforms such as solutions supporting the Data and Analytics Facility for National Infrastructure (DAFNI). This is a project that aims to integrate advanced research models with established national systems for modelling critical infrastructure. “Led by Oxford University and funded by the EPSRC, the initiative aspires over the next 10 years to be able to model the UK at a household level, 50 years into the future,” explains Nick Cook, a senior analyst at Tessella. Here, the firm is involved in conceptualizing DAFNI’s capabilities and implementation roadmap. One of the project’s early goals is to create a “digital twin” of a UK city such as Exeter – in other words, to virtually describe a city with a population of several hundred thousand people together with its transport infrastructure, utility services and environmental context. This digital twin would, for example, help planners to decide where to invest in new road or rail networks, and to identify the best sites for housing, schools and doctors’ surgeries.


Automation in the workplace could disproportionately affect women


It wouldn’t be unprecedented. Decades ago, roles like “social media manager” and “data scientist” hadn’t been conceived, much less sought after. Krishnan said that typically, roughly 10% of employment at any given time is in such newly emerged groups of occupations, amounting to 160 million jobs globally. Whether they take up new work or acquire new skills in their current fields, Krishnan anticipates that tens of millions of workers will have to make some sort of occupational transition by 2030. Many of those workers are women — between 40 million and 160 million globally. Encouragingly, in both developed and emerging markets, the new jobs that are expected to come into vogue are likely to be higher-wage, according to Krishnan. Those jobs will also involve less drudgery, traded for tasks that are ostensibly more socially and intellectually stimulating. In fact, Krishnan believes that this future of work will require more interpersonal know-how of the workers who occupy its roles.


How Artificial Intelligence is Changing the Landscape of Digital Marketing

Artificial intelligence tools help digital marketers understand customer behavior and make the right recommendations at the right time. A tool with millions of predefined conditions knows how customers react to a particular situation, ad copy, video or other touchpoint, and in a limited timeframe no human can assess such a large set of data better than a machine. With the help of AI, the insights are at your fingertips. Where to find an audience? How to interact with them? What to send them, and how? What is the right time to connect? When to send a follow-up? All these answers lie in AI-powered digital marketing platforms. With smart analysis patterns, AI tools can make better suggestions and support decision making. A personalized content recommendation to the right audience at the right time guarantees the success of any campaign. Digital marketers are being pushed ever harder to demonstrate the success of content and campaigns, and with AI tools, putting the available data to use becomes far easier and more effective.



Quote for the day:


"We can't understand someone else's ideas while we're busy thinking about our own." -- Tim Fargo


Daily Tech Digest - September 21, 2019

The Carbon Cost Of Digital Tech

“The cloud is more efficient, but that doesn’t automatically mean it’s more environmentally friendly,” Adams points out. “A less efficient data centre running on renewables will almost always be a better choice, environmentally speaking, than an efficient one that uses coal.” For the last 10 years, the Green Web Foundation has maintained the world’s largest database of website and digital providers using renewable power, making it easier for companies to find greener options. Becoming a sustainable, environmentally positive business isn’t just about decarbonisation. It’s equally important, for example, to think about where investment is going. Take Google, a company that has taken significant steps to run its data centres more efficiently. The tech giant reached its goal of sourcing 100 per cent of its energy from renewable sources in 2017, and last year it was the world’s largest corporate buyer of renewable energy. Despite this, Google is far from perfect. “Google is funding climate deniers in the US, and supporting politicians who consistently cripple effective legislation in other sectors, as well as aggressively chasing business in the oil and gas sector,” Adams says.



The Struggles of Innovative People

In his book, Adam Grant draws on a whole range of social struggles in the USA, including, among others, the fight for Black people's rights and for women's right to vote. He identifies a pattern that comes up very often. First, people who would be described as radical, and often violent, highlight the problem. They are poorly received and misunderstood, and they fail - but not totally. Their message is heard by more moderate people, who want to go through far more legal means and carry a non-aggressive message. It is then that the people in power give in to their demands, because the subject of the struggle is presented in a non-aggressive way and is heard by all - unlike the message of the violent. Some will argue that governments negotiate with non-violent people only because violent people keep up the threat. Either way, we can learn two things from this: you have to get the message across, and the people in power must fear something. It is the combination of fear plus solution that makes people move.


Ransomware: 11 steps you should take to protect against disaster

It's not just criminal gangs that have noticed the power of ransomware: state-backed hacking groups have also used ransomware to create both chaos and profit for their backers. ... A recovery plan that covers all types of tech disaster should be a standard part of business planning, and should include a ransomware response. That's not just the technical response -- cleaning the PCs and reinstalling data from backups -- but also the broader business response that might be needed. Things to consider include how to explain the situation to customers, suppliers and the press. Consider whether regulators need to be notified, or if you should call in police or insurers. Having a document is not enough: you also need to test out the assumptions you have made, because some of them will be wrong. ... First, there's no guarantee that the criminals will hand over the encryption key when you pay up -- they are crooks, after all. If your organisation is seen to be willing to pay, that will probably encourage more attacks, either by the same group or others. There's also the broader impact to consider.


The Driver in the Driverless Car

This book came about from a simple observation. I noticed that even my techie friends in Silicon Valley were feeling overwhelmed by the pace of technological change. I also believe that the risks of letting technology just develop without thinking through the societal implications are a massive problem - look at the rise of Facebook and all the problems that came because it refused to consider the privacy implications, or the implications of its tools being hijacked for genocide and hate speech. ... Robots are good for three types of tasks: dirty, dangerous and dull jobs. Dirty jobs might be, for example, cleaning out oil pipelines. Dangerous jobs include bomb disposal or drones inspecting communications towers, infrastructure, or rooftops for faults and damage. Dull jobs are things like delivering food in a hospital or dispensing medicine. Curiously, autonomous vehicles are a great use case for robots - driving is both dangerous and dull. It’s also important to recognize that a job can be both complex and based on repetition.


Adoption of AI Surveillance Technology Surges

"Sadly I'm not surprised," says Alan Woodward, a computer science professor at the University of Surrey, commenting on the report's finding that AI surveillance technology is being rapidly embraced by governments. "Adoption of something this useful for security is bound to run ahead, and as is so often the case, particularly ahead of the legislation or regulation one might hope for." Technology, of course, is the limiting practical factor when designing more automated surveillance systems. But tool set capabilities and combinations have been rapidly improving. "Several breakthroughs are making new achievements in the field possible: the maturation of machine learning and the onset of deep learning; cloud computing and online data gathering; a new generation of advanced microchips and computer hardware; improved performance of complex algorithms; and market-driven incentives for new uses of AI technology," says Feldstein, who's also a nonresident fellow in Carnegie's Democracy, Conflict, and Governance Program. He formerly served as a deputy assistant secretary in the Democracy, Human Rights, and Labor Bureau at the U.S. Department of State.


How to better integrate IT security and IT strategy

“The human element is the biggest risk facing any IT organization today,” McGibney says. “A successful phishing campaign can easily bring a company to a screeching halt. To provide true defense in depth, IT and security need to work together to implement solutions across the attack surface, whether it be on-[premises] solutions or cloud-based. What the security group implements affects infrastructure, and what infrastructure implements affects security. They truly go hand-in-hand.” IT and security teams need to understand what they are both trying to accomplish, and why it’s important to the organization, Wenzler says. “It’s easy to get risk strategies out of alignment with technology goals when the two sides don’t talk to each other,” he says. “While separate functions, they are integral to each other’s success, so without constant communication they’ll remain out of sync.” It’s also important for the two disciplines to build better relationships with each other. Information security people are sometimes seen as roadblocks to projects and as hindering workflows, Cardamone says.


How a small business should respond to a hack

Responding to an attack starts long before it occurs. You should – if you haven't already – put in place an action plan for responding to an attack. All staff should know what is expected of them if the worst occurs, and particularly how to respond to customers who might be worried about their personal data being stolen. You should also prioritize the parts of your business that are most at risk during a cyberattack and focus your security measures on them. Many small businesses cannot afford to invest in sophisticated security measures for the whole of their IT infrastructure, but you can protect the systems and databases that contain the most sensitive information. Regular audits of the information you hold will also help you to identify exactly what has been stolen and will also help law enforcement track down the culprits. ... First, it's important that all of your staff know how to identify a hack at the earliest possible opportunity. If you can catch an attack whilst it is still in progress, all the better: this might allow law enforcement to identify the criminal immediately.


Java SE 13 adds performance, security, stability features

Gil Tene, CTO of Azul Systems, said he was skeptical of the sped-up Java release cadence, but the stability of the Java Community Process (JCP), along with the Java reference implementation and the Java Technology Compatibility Kit (TCK), gave him assurance that the speedier cadence would work just fine. Bruno Souza, president of SouJava, a Java user group based in Brazil, concurred. In moving to the six-month cadence, Oracle and the JCP kept their commitment to a fast, open source development model, but also kept the same standards of compatibility and quality, he said. OpenJDK, the open source version of Java SE, is where innovation will occur, Souza said. "And the TCK lets us verify that all these implementations run the same way," he added. Meanwhile, Java tools vendors have begun to take the Java release cadence into account for their products and services. For instance, Mala Gupta, a developer advocate at JetBrains, which produces the popular IntelliJ IDEA Java IDE, said JetBrains has a four-month release cycle that is tuned to keep up with each new release of Java.


Encrypted Smartphone Takedown Outed Canadian Mole

Bill Majcher, a former RCMP officer with extensive experience in conducting undercover operations, tells Global News that Ortis would have had access to almost any type of classified information, which the publication notes "could include the force's blueprints for covert operations worldwide, as well as the identities of undercover officers, police agents working within transnational crime groups, officers from Five Eyes partners used in RCMP probes, and even witnesses relocated to other countries." Lucki says the charges against Ortis have "shaken many people throughout the RCMP," as well as Canada's intelligence partners. "While these allegations, if proven true, are extremely unsettling, Canadians and our law enforcement partners can trust that our priority continues to be the integrity of the investigations and the safety and security of the public we serve," she says. The arrest of Ortis appears to have resulted from authorities taking down a secure smartphone service marketed to criminals.


Important Things You Need To Know About Agile Development

In the agile world, testing becomes a regular part of the process. Small pieces of the project are tested and presented on a regular basis, giving everyone a better sense of the project’s timeline. In addition, this frequent testing allows developers to catch bugs before they become deeply entrenched in the code. ... Agile development does not mandate particular practices, but a number of standard practices have emerged from the values of the agile model. One common example is pair programming, in which two developers work together as they code a piece of the project: one programmer writes the code, and the other reviews it as it is written. Another common agile practice is the daily standup meeting, where each team gives a status report to the project manager and new goals are set for the current day. A third practice is working in sprints: rather than set one long-term goal, many agile teams do work in short bursts. Many offices use a two-week sprint model, in which developers try to resolve as many issues or user stories as they can in that short period.



Quote for the day:


"Leaders must know where they are going if they expect others to willingly join them on the journey." -- Kouzes & Posner


Daily Tech Digest - September 20, 2019

Digitalization: Welcome to the City 4.0

Applied to cities, digitalization can not only improve efficiency by minimizing wasted time and resources, but also improve a city’s productivity, secure growth, and drive economic activity. The Finnish capital of Helsinki is currently in the process of proving this. An early adopter of smart city technology and modeling, it launched the Helsinki 3D+ project to create a three-dimensional representation of the city using reality capture technology provided by the software company Bentley Systems for geocoordination, evaluation of options, modeling, and visualization. The project’s aim is to improve the city’s internal services and processes and provide data for further smart city development. Upon completion, Helsinki’s 3-D city model will be shared as open data to encourage commercial and academic research and development. Thanks to the available data and analytics, the city will be able to drive its green agenda in a way that is much more focused on sustainable consumption of natural resources and a healthy environment.



How to decommission a data center

"They need to know what they have. That’s the most basic. What equipment do you have? What apps live on what device? And what data lives on each device?” says Ralph Schwarzbach, who worked as a security and decommissioning expert with Verisign and Symantec before retiring. All that information should be in a configuration management database (CMDB), which serves as a repository for configuration data pertaining to physical and virtual IT assets. A CMDB “is a popular tool, but having the tool and processes in place to maintain data accuracy are two distinct things," Schwarzbach says. A CMDB is a necessity for asset inventory, but “any good CMDB is only as good as the data you put in it,” says Al DeRose, a senior IT director responsible for infrastructure design, implementation and management at a large media firm. “If your asset management department is very good at entering data, your CMDB is great. [In] my experience, smaller companies will do a better job of assets. Larger companies, because of the breadth of their space, aren’t so good at knowing what their assets are, but they are getting better.”
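The three questions above (what equipment, which apps, what data) map naturally onto a CMDB-style record. A minimal sketch in Python; the field names and the "PII" label are illustrative assumptions, not any particular CMDB product's schema:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigItem:
    """One CMDB record: a device, the apps it hosts, the data it holds."""
    hostname: str
    apps: list = field(default_factory=list)
    data_classes: list = field(default_factory=list)  # e.g. "PII", "logs"

def decommission_blockers(cmdb: list) -> list:
    """Hosts still carrying apps or sensitive data, which therefore
    cannot simply be powered off and scrapped."""
    return [ci.hostname for ci in cmdb
            if ci.apps or "PII" in ci.data_classes]
```

As DeRose notes, the check is only as good as the data entered: a host recorded with empty `apps` and `data_classes` sails through, whether or not that record is accurate.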


The Problem With “Cloud Native”

The problem is thinking about, and creating a common understanding around, a change that big. Here the industry does itself no favors. For years, many people thought cloud technology was somehow part of the atmosphere itself. In reality, few things are so very physical: big public cloud computing vendors like Amazon Web Services, Microsoft Azure, and Google Cloud each operate globe-spanning systems, with millions of computer servers connected by hundreds of thousands of miles of fiber-optic cable. Most people now know the basics of cloud computing, but understanding it remains a problem. Take a currently popular term, “cloud native.” Information technologists use it to describe strategies, people, teams, and companies that “get” the cloud and use it to maximum utility. Others use it to describe an approach to building, deploying, and managing things in a cloud computing environment. People differ. Whether it’s referring to people or software, “cloud native” is shorthand for operating with the fullest power of the cloud.


Why You Need a Cyber Hygiene Program

Well-known campaigns and breaches either begin with, or are accelerated by, breakdowns in the most mundane areas of security and system management. Unpatched systems, misconfigured protections, overprivileged accounts and pervasively interconnected internal networks all make the initial intrusion easier and make the lateral spread of an attack almost inevitable. I use the phrase “cyber hygiene” to describe the simple but overlooked security housekeeping that ensures visibility across the organization’s estate, that highlights latent vulnerability in unpatched systems and that encourages periodic review of network topologies and account or role permissions. These are not complex security tasks like threat hunting or forensic root cause analysis; they are simple, administrative functions that can provide value far in excess of more expensive and intrusive later-stage security investments. ... The execution of most cyber hygiene tasks falls squarely on the shoulders of the IT, network and support teams.
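Housekeeping of this kind is simple enough to automate. A minimal sketch, assuming a hypothetical inventory that records each host's last patch date and each account's roles; the 30-day window and role names are illustrative:

```python
from datetime import date, timedelta

# Policy window: hosts unpatched for longer than this are flagged.
PATCH_WINDOW = timedelta(days=30)

def stale_hosts(hosts: dict, today: date) -> list:
    """`hosts` maps hostname -> date of last applied patch."""
    return [h for h, patched in hosts.items() if today - patched > PATCH_WINDOW]

def overprivileged(accounts: dict) -> list:
    """`accounts` maps user -> set of roles; flags admin rights held
    outside the (assumed) it-staff role."""
    return [u for u, roles in accounts.items()
            if "admin" in roles and "it-staff" not in roles]
```

Run periodically, checks like these give the visibility and permission review the passage describes, at a fraction of the cost of later-stage security tooling.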


A Beginner's Guide to Microsegmentation

Security experts overwhelmingly agree that visibility issues are the biggest obstacles standing in the way of successful microsegmentation deployments. The more granular the segments, the better the IT organization needs to understand exactly how data flows and how systems, applications, and services communicate with one another. "You not only need to know what flows are going through your route gateways, but you also need to see down to the individual host, whether physical or virtualized," says Jarrod Stenberg, director and chief information security architect at Entrust Datacard. "You must have the infrastructure and tooling in place to get this information, or your implementation is likely to fail." This is why any successful microsegmentation effort needs to start with a thorough discovery and mapping process. As part of that, organizations should either dig up or develop thorough documentation of their applications, says Stenberg, who explains that this documentation will be needed to support all future microsegmentation policy decisions and to ensure each app keeps working the way it is supposed to.
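The discovery-and-mapping step can be pictured as aggregating observed connections into a per-host flow map, which then backs a default-deny segmentation policy. A toy sketch, assuming a hypothetical `(src, dst, port)` log format:

```python
from collections import defaultdict

def build_flow_map(connections):
    """Aggregate observed (src, dst, port) records into a map of
    src host -> set of (dst, port) it was seen talking to."""
    flows = defaultdict(set)
    for src, dst, port in connections:
        flows[src].add((dst, port))
    return dict(flows)

def allowed(flows, src, dst, port):
    """Default-deny check: permit only flows observed during discovery."""
    return (dst, port) in flows.get(src, set())
```

This is why incomplete visibility is fatal: any legitimate flow missed during discovery is denied by the resulting policy, breaking the application it belongs to.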


Cryptomining Botnet Smominru Returns With a Vengeance

Smominru uses a number of methods to compromise devices. For example, in addition to exploiting the EternalBlue vulnerability found in certain versions of Windows, it uses brute-force attacks against MS-SQL, Remote Desktop Protocol and Telnet, according to the Guardicore report. Once the botnet compromises the system, a PowerShell script named blueps.txt is downloaded onto the machine to run a number of operations, including downloading and executing three binary files - a worm downloader, a Trojan and a Master Boot Record (MBR) rootkit, Guardicore researchers found. Malicious payloads move through the network through the worm module. The PcShare open-source Trojan has a number of jobs, including acting as the command-and-control, capturing screenshots and stealing information, and most likely downloading a Monero cryptominer, the report notes. The group behind the botnet uses almost 20 scripts and binary payloads in its attacks. Plus, it uses various backdoors in different parts of the attack, the researchers report. Newly created users, scheduled tasks, Windows Management Instrumentation objects and services run when the system boots, Guardicore reports.


How to prevent lingering software quality issues


To build in quality, he advocates that IT undertake systematic approaches to software testing. In manufacturing, building in quality entails designing a process that helps improve the final product, while in IT that approach is about producing a higher-quality application. Yet, software quality and usability issues are, in many ways, harder to diagnose than problems in physical goods manufacturing. "In manufacturing, we can watch a product coming together and see if there's going to be interference between different parts," Gruver writes in the book. "In software, it's hard to see quality issues. The primary way that we start to see the product quality in software is with testing. Even then, it is difficult to find the source of the problem." Gruver recommends that software teams put together a repeatable deployment pipeline, which enables them to have a "stable quality signal" that informs the relevant parties as to whether the amount of variation in performance and quality between software builds is acceptable.
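One way to picture the "stable quality signal" Gruver describes is a pipeline gate that compares each build's metrics against a baseline and fails when variation exceeds tolerance. A minimal sketch; the metric names and the 10% tolerance are illustrative assumptions:

```python
def quality_gate(baseline: dict, build: dict, tolerance: float = 0.10) -> bool:
    """True only if every baseline metric stays within `tolerance`
    relative drift in the new build; otherwise the pipeline fails."""
    for metric, base in baseline.items():
        drift = abs(build[metric] - base) / base
        if drift > tolerance:
            return False
    return True
```

For example, with a baseline of `{"p95_latency_ms": 200.0}`, a build measuring 210 ms passes (5% drift) while 260 ms fails (30% drift), turning "is this build's quality acceptable?" into a repeatable, automated signal rather than a judgment call.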


The arrival of 'multicloud 2.0'

The arrival of 'multicloud 2.0'
What’s helpful about the federated Kubernetes approach is that this architecture makes it easy to deal with multiple clusters running on multiple clouds. This comes from two major building blocks. First is the capability of syncing resources across clusters. As you may expect, this is the core challenge for those deploying multicloud Kubernetes. Mechanisms within Kubernetes can automatically sync deployments across multiple clusters running on many public clouds. Second is intercluster discovery: the capability of automatically configuring DNS servers and load balancers with backends supporting all clusters running across many public clouds. The benefits of leveraging multicloud/federated Kubernetes include high availability, considering you can replicate active/active clusters across multiple public clouds. Thus, if one has an outage, the other can pick up the processing without missing a beat. You also avoid that dreaded provider lock-in, because Kubernetes is the abstraction layer that removes you from the complexities and native details of each public cloud provider.


Microservices With Node.js: Scalable, Superior, and Secure Apps

Image title
Node.js makes it easier to build highly scalable apps through its non-blocking I/O and event-driven model, which makes it suitable for data-centric and real-time apps. Node.js is highly suitable for real-time collaboration tools, streaming and networking apps, and data-intensive applications. Microservices, on the other hand, make it easy for the developer to create smaller services that are scalable, independent, loosely coupled, and very suitable for complex, large enterprise applications. The nature and goal of both these concepts are aligned at the core, making them suitable for each other. Used together, they can power highly scalable applications and handle thousands of concurrent requests without slowing down the system. Microservices and Node.js have given rise to a culture like DevOps, where frequent and faster deliveries are of more value than the traditional long development cycle. Microservices are also closely associated with container orchestration - that is, microservices are typically managed by a container platform - offering a modern way to design, develop, and deploy software.
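A minimal sketch of that non-blocking, event-driven model, with timers standing in for real network calls (the labels and delays are purely illustrative):

```typescript
// Three simulated I/O operations (stand-ins for a database query,
// cache read, and API call) run concurrently, so total wall time is
// close to the slowest operation, not the sum of all three.

const simulatedIo = (label: string, ms: number): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(label), ms));

async function handleRequest(): Promise<number> {
  const start = Date.now();
  // All three calls are in flight at once; the event loop remains free
  // to service other requests while the timers run.
  await Promise.all([
    simulatedIo("db query", 100),
    simulatedIo("cache read", 50),
    simulatedIo("api call", 120),
  ]);
  return Date.now() - start; // ~120ms rather than the ~270ms a serial version would take
}

handleRequest().then((elapsed) => console.log(`handled in ~${elapsed}ms`));
```

This is the property that lets a single Node.js process keep thousands of requests in flight: while one request waits on I/O, the event loop services the others.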


Supply Chain Attacks: Hackers Hit IT Providers

Supply Chain Attacks: Hackers Hit IT Providers
Symantec says the group has hit at least 11 organizations, mostly in Saudi Arabia, and appears to have gained admin-level access to at least two organizations as part of its efforts to parlay hacks of IT providers into the ability to hack their many customers. In those two networks, it notes, attackers had managed to infect several hundred PCs with malware called Backdoor.Syskit. "This is an unusually large number of computers to be compromised in a targeted attack," Symantec's security researchers say in a report. "It is possible that the attackers were forced to infect many machines before finding those that were of most interest to them." Backdoor.Syskit is a Trojan, written in Delphi and .NET, that's designed to phone home to a command-and-control server and give attackers remote access to the infected system so they can push and execute additional malware on the endpoint, according to Symantec. The security firm first rolled out an anti-virus signature for the malware on Aug. 21. Symantec says attackers have in some cases also used PowerShell backdoors - an approach known as living off the land, since it's tough to spot attackers' use of legitimate tools.



Quote for the day:


"A culture of discipline is not a principle of business; it is a principle of greatness." -- Jim Collins


Daily Tech Digest - September 19, 2019

Space internet service closer to becoming reality

Space internet service closer to becoming reality
Interestingly, though, a SpaceX filing made with the U.S. Federal Communications Commission (FCC) at the end of August seeks to modify its original FCC application because of results it discovered in its initial satellite deployment. SpaceX is now asking for permission to “re-space” previously authorized, yet unlaunched satellites. The company says it can better optimize its constellation by spreading the satellites out more. “This adjustment will accelerate coverage to southern states and U.S. territories, potentially expediting coverage to the southern continental United States by the end of the next hurricane season and reaching other U.S. territories by the following hurricane season,” the document says. Satellite internet is used extensively in disaster recovery. Should SpaceX's request be approved, it will speed up service deployment for the continental U.S. because fewer satellites will be needed. Because we are currently in a hurricane season (Atlantic basin hurricane seasons last from June 1 to Nov. 30 each year), one can assume they are talking about services at the end of 2020 and end of 2021, respectively.



Windows Defender malware scans are failing after a few seconds

The issue has been widely reported over the past two days on the Microsoft tech support forums, Reddit, and tech support sites like AskWoody, DeskModder, BornCity, and Bleeping Computer. The bug impacts Windows Defender version 4.18.1908.7 and later, released earlier this week. The bug was introduced while Microsoft tried to fix another bug introduced with the July 2019 Patch Tuesday. Per reports, the original bug broke "sfc /scannow," a command that is part of the Windows System File Checker utility, which lets Windows users scan and fix corrupted files. After the July Patch Tuesday, this utility started flagging some of Windows Defender's internal modules as corrupted, resulting in incorrect error messages that fooled admins into believing there was something wrong with their Windows Defender installation and its updates. Microsoft announced a fix for the System File Checker bug in August, but the actual patch was delayed. When the fix arrived earlier this week, it didn't yield the expected results.


What does upstream and downstream development even mean?


If the flow of data goes toward the original source, that flow is upstream. If the flow of data goes away from the original source, that flow is downstream. ... The idea that either upstream or downstream could be superior depends on the commit. Say, for example, the developer of Application B makes a change to the application that adds a new feature unique to B. If this feature has no bearing on Application A, but does have a use in Application D, the only logical flow is downstream. If, on the other hand, the developer of Application D submits a change that would affect all other applications, then the flow should be upstream to the source (otherwise, the change wouldn't make it to applications B or C). ... An upstream flow of data has one major benefit (besides all forks gaining access to the commit). Let's say you're the developer of Application B and you've made a change to the core of the software. If you send that change downstream, you and the developer of D will benefit. However, when the developer of Application A makes a different change to the core of the software, and that change is sent downstream, it could overwrite the commit in Application B.
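The flow described above can be sketched with a toy model. The repo names and fork layout are hypothetical: Application A is the original source, B and C are forks of A, and D is a fork of B.

```typescript
// Toy model of commit propagation: a commit pushed downstream reaches
// only the forks below the point where it was made, while a commit
// accepted upstream at the source then flows down to every fork.

type Repo = { name: string; commits: Set<string>; forks: Repo[] };

const makeRepo = (name: string, forks: Repo[] = []): Repo => ({
  name,
  commits: new Set(),
  forks,
});

function pushDownstream(repo: Repo, commit: string): void {
  repo.commits.add(commit);
  repo.forks.forEach((fork) => pushDownstream(fork, commit));
}

const d = makeRepo("D");
const b = makeRepo("B", [d]);
const c = makeRepo("C");
const a = makeRepo("A", [b, c]);

// A feature unique to B, sent downstream: reaches B and D, never A or C.
pushDownstream(b, "feature-b");

// A core change accepted upstream at the source A: flows to every fork.
pushDownstream(a, "core-fix");

console.log(c.commits.has("feature-b")); // false
console.log(c.commits.has("core-fix")); // true
```

The model also shows the overwrite risk from the article: two forks that each keep a core change downstream never see each other's versions until the source reconciles them.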


Soft Skills: Controlling your career

Projecting positivity is also a soft skill. The reality is that a busy IT department will achieve a lot, and there is much to focus on. Most of the technical people I know are passionate about what they do. Passion drives excellence, but it also has a dark side that we see manifest in various IT "religious wars". It narrows the focus, closes the mind and prevents us from acknowledging any evidence that contradicts our beliefs. Passion is also a big turn-off for senior executives, who tend to prefer calmness. It is difficult to get the balance right between passion and dispassion. The best advice I have been given is that it is OK to hold strong opinions but important to hold them loosely. By all means be passionate and use it to drive you to put forward the best possible case for your chosen subject, but accept that others will have equally passionate views and either, or both, of you may be wrong. If you are not passionate then you won't put forward convincing arguments or test hypotheses with sufficient rigour.


Creating ASP.NET Core Application with Docker Support

Image 1
A Docker container packages the operating system, source code, environment variables (if any) and the dependent components needed to run the software. So, if anyone wants to run your software, they can simply take the container and get started, without putting effort into setting up the machine to make things work. ... Many times, you must have heard developers saying – it is working fine on my machine, but I don’t know what is missing on your machine, or why the same software is not working on your machine? Such discussions usually pop up during the testing phase and, in my personal experience, sometimes it takes hours to identify that small missed-out dependency. Here, Docker comes to the rescue. With containerization, each and every dependency is packed into containers, available for both Linux and Windows. Hence, everyone using the software will have the same environment. Basically, the concept of Docker has all but eliminated the problem of mismatched environments. Isn’t it amazing?


Why businesses would rather lose revenue than data


A big reason for cybersecurity issues is the lack of IT talent in SMBs, the report found. Half of businesses said they provide only one-time security awareness training to staff. To address the skills gap, a third of companies (33%) said they currently outsource some of their IT activities, and another 40% said they plan to do so. Regardless, SMBs need a plan. "With regards to addressing security concerns, it's important to have several layers of security so that there's no way an outside 'silver bullet' can penetrate a system," Claudio said. "Making sure staff are aware of potential security threats, like phishing scams, is also crucial as they will usually be your first line of defense. Patch management and vulnerability assessment are also mission critical." ... "To support business continuity, it's important to have a great backup and disaster recovery program including off-site data copy in the event of an emergency," Claudio noted. "Again, making sure you have access to the right IT resources and skill sets by utilizing a trusted outsourced service provider is essential."


Oracle goes all in on cloud automation

Talk to the cloud: Oracle rolls out more conversational interfaces at OpenWorld 2019
“Digital assistants and conversational UI are going to transform the way we interact with these applications, and just make things a lot easier to deal with,” Miranda says. They will also enable supply chain managers to check on delivery status, track deviations and report incidents, Oracle’s goal being to enable root-cause analysis of supply chain problems via the chat interface. In HR, Oracle HCM Cloud will chat with employees about onboarding and accessing their performance evaluations, while sales staff will be able to configure quotes using voice commands, Oracle says. Oracle and Amazon are famously combative, but Oracle is starting to adopt the same terminology Amazon uses for its Alexa virtual assistant, referring to extended dialogs to accomplish a goal as “conversations” and tasks that its digital assistants can help with as “skills.” R. “Ray” Wang, founder and principal analyst at Constellation Research, says Oracle’s effort to weave AI into all its apps is paying off. ... “It’s the long-term performance improvement of feedback loops. The next best actions are more than rudimentary. Think of the Digital Assistants plus Intelligent Document Recognition, and predictive planning as all tools to help drive more automation and augmented decisions in enterprise apps.”


Strengthen Distributed Teams with Social Conversations

"Cognitive trust is based on the confidence you feel in another person’s accomplishments, skills, and reliability, while affective trust arises from feelings of emotional closeness, empathy, or friendship." In your team, trust might be developed and sustained between individuals in different ways. Some of you will be looking out for how much others fulfill their offer of help, whether they deliver their work on time, and if their work is of high quality. Meanwhile, others will be looking for a more personal or social connection, looking for things they have in common with others—which is easier to find out during real-time conversations. Getting to know each other well requires having a mental image of the person, hearing their voice, and seeing their facial expressions, and online meetings can help us achieve this. In this article, I suggest two ways to use meetings to strengthen your team relationships: incorporate social conversations into your scheduled meetings, and hold online meetings for the specific purpose of reconnecting as colleagues.


DevSecOps veterans share security strategy, lessons learned


Once DevOps and IT security teams are aligned, the most important groundwork for improved DevOps security is to gather accurate data on IT assets and the IT environment, and give IT teams access to relevant data in context, practitioners said. "What you really want from [DevSecOps] models is to avoid making assumptions and to test those assumptions, because assumptions lead to vulnerability," Vehent said, recalling an incident at Mozilla where an assumption about SSL certificate expiration dates brought down Mozilla's add-ons service at launch. ... Once a strategy is in place, it's time to evaluate tools for security automation and visibility. Context is key in security monitoring, said Erkang Zheng, chief information security officer at LifeOmic Security, a healthcare software company, which also markets its internally developed security visibility tools as JupiterOne. "Attackers think in graphs, defenders think in lists, and that's how attackers win," Zheng said during a presentation here. "Stop thinking in lists and tables, and start thinking in entities and relationships."


Cisco spreads ACI to Microsoft Azure, multicloud and SD-WAN environments

access control / authentication / privileges / managing permissions
Key new pieces of ACI Anywhere include the ability to integrate Microsoft Azure clouds and a cloud-only implementation of ACI. Cisco has been working closely with Microsoft, and while previewing the Azure cloud support earlier this year it also added Azure Kubernetes Service (AKS) to the managed services that natively integrate with the Cisco Container Platform. With the Azure cloud extension, the service uses the Cisco Cloud APIC, which runs natively in the Azure public cloud to provide automated connectivity, policy translation and enhanced visibility of workloads in the public cloud, Cisco said. With the new Azure extensions, customers can tap into cloud workloads through ACI integrations with Azure technologies like Azure Monitor, Azure Resource Health and Azure Resource Manager to fine-tune their network operations for speed, flexibility and cost, Cisco stated. As part of the Azure package, the Cisco Cloud Services Router (CSR) 1000V brings connectivity between on-premises and Azure cloud environments.




Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones


Daily Tech Digest - September 18, 2019

The Seven Patterns Of AI

The Seven Patterns of AI
From autonomous vehicles, predictive analytics applications, and facial recognition to chatbots, virtual assistants, cognitive automation, and fraud detection, the use cases for AI are many. However, regardless of the application of AI, there is commonality to all these applications. Those who have implemented hundreds or even thousands of AI projects realize that despite all this diversity in application, AI use cases fall into one or more of seven common patterns. The seven patterns are: hyperpersonalization, autonomous systems, predictive analytics and decision support, conversational/human interactions, patterns and anomalies, recognition systems, and goal-driven systems. Any customized approach to AI is going to require its own programming and pattern, but no matter what combination these trends are used in, they all follow their own pretty standard set of rules. ... While these might seem like discrete patterns that are implemented individually in typical AI projects, in reality, we have seen organizations combine one or more of these seven patterns to realize their goals. Thinking of AI projects in terms of these patterns will help companies better approach, plan, and execute AI projects. In fact, emerging methodologies are focusing on the use of these seven patterns as a way to expedite AI project planning.



Aliro aims to make quantum computers usable by traditional programmers


Quantum supremacy—the threshold at which quantum computers are theorized to be capable of solving problems that traditional computers would not (practically) be able to solve—is likely decades away. Meanwhile, quantum volume, a metric that "enables the comparison of hardware with widely different performance characteristics and quantifies the complexity of algorithms that can be run," according to IBM, has gained acceptance from NIST and analyst firm Gartner as a useful metric. Aliro proposes the idea of "quantum value," the point at which organizations using high-performance computing today can achieve results by using quantum computers to accelerate their workloads. "We're dealing with enterprises that want to get business value from these machines…. "We're not ready for many levels of abstraction above the quantum hardware, but we're ready for a little bit. When you get down to the equivalent of the machine language, these things are very, very different, and it's not just what kind of qubits they are. It's noise characteristics, it's connectivity," Ricotta said. "Rigetti and IBM Q machines both use superconducting Josephson junctions around the same number—approximately, the same order of magnitude of qubits—but they are connected in different ways ..."


New hacking group targets IT companies in first stage of supply chain attacks


In two of the attacks, researchers found that hundreds of computers were compromised with malware, indicating that the attackers were simply infecting all the machines they could throughout the organisations in order to find key targets. The most recently recorded activity from Tortoiseshell was in July 2019, with attacks by the group identified by a unique custom payload: Backdoor.Syskit. This malware is built in both Delphi and .NET programming languages and secretly opens an initial backdoor onto compromised computers, allowing attackers to collect information including the IP address, the operating system version and the computer name. Syskit can also download and execute additional tools and commands, and Tortoiseshell attacks also deploy several publicly available tools as information stealers to gather data on user activity. While it remains uncertain how the malware is delivered, researchers suggest that it could potentially be distributed via a compromised web server, because in one instance the first indication of malware on the network was a compromised web shell – something that can provide an easy way into a targeted network.


How Ransomware Criminals Turn Friends into Enemies

As someone whose job it is to learn as much as possible about the online criminal ecosystem, I often spot trends before they make mainstream headlines. This type of attack was high on my list of attacks likely to increase. Supply chain attacks aren't new. They've been increasing in frequency, however, and gaining more attention. While there are many types of supply chain attacks, this particular type — compromising a service provider to gain access to its customers — is becoming more popular among skilled ransomware crews. ... Managing IT can be hard, especially for small and midsize businesses lacking the necessary resources. It probably seemed like a great idea for these small dental practices to outsource IT to Digital Dental Record. They're not alone. The managed services industry is growing extremely fast with businesses struggling to manage the technology required to run a modern establishment. With attacks on MSPs on the rise, MSPs need to step up their security game, regardless of the kind of specialized services they provide.


AI in cyber security: a necessity or too early to introduce?

AI in cyber security: a necessity or too early to introduce? image
Dr Leila Powell, lead security data scientist at Panaseer, agrees that “the key challenge for most security teams right now is getting hold of the data they need in order to get even a basic level of visibility on the fundamentals of how their security program is performing and how they measure up against regulatory frameworks like GDPR. This is not a trivial task! “With access to security-relevant data controlled by multiple stakeholders from IT to MSSPs and tool vendors, there can be a lot of red tape on top of the technical challenges of bringing together multiple siloed data sources. Then there’s data cleaning, standardisation, correlation and understanding — which often require a detailed knowledge of the idiosyncrasies of all the unique datasets. “As it stands, once all that work has gone into data collection, the benefits of applying simple statistics should not be underestimated. These provide plenty of new insights for teams to work through — most won’t even have the resources to deal with all of these, let alone additional alerting from ML solutions.


2019 Digital operations study for energy

Looking ahead to the next five years, the picture improves somewhat and offers more hope for the utilities sector. For instance, of the EMEA utilities surveyed by Strategy&, 5 percent said they had already implemented AI applications and another 9 percent said they had piloted such programs. That compares with 20 percent and 6 percent, respectively, for chemicals companies. But through 2024, including planned technologies, AI adoption in the utilities sector may increase by another 15 percent, according to the survey, which would put it on par with chemicals companies and just below oil and gas AI implementation. ... Many utilities make the mistake of trying to implement too many ambitious digital strategies at the same time and end up spreading their financial and staff resources, as well as their capabilities, too thin. A better approach is to define the three to five critical digitization efforts that are strategically essential to defending and expanding competitive advantage among startups and established power companies.


Microsoft brings IBM iron to Azure for on-premises migrations

Microsoft brings IBM iron to Azure for on-premises migrations
Under the deal, Microsoft will take Power S922 servers from IBM and deploy them in an undeclared Azure region. These machines can run the PowerVM hypervisor, which supports legacy IBM operating systems as well as Linux. "Migrating to the cloud by first replacing older technologies is time consuming and risky," said Brad Schick, CEO of Skytap, in a statement. "Skytap’s goal has always been to provide businesses with a path to get these systems into the cloud with little change and less risk. Working with Microsoft, we will bring Skytap’s native support for a wide range of legacy applications to Microsoft Azure, including those dependent on IBM i, AIX, and Linux on Power. This will give businesses the ability to extend the life of traditional systems and increase their value by modernizing with Azure services." As Power-based applications are modernized, Skytap will then bring in DevOps CI/CD toolchains to accelerate software delivery. After moving to Skytap on Azure, customers will be able to integrate Azure DevOps, in addition to CI/CD toolchains for Power, such as Eradani and UrbanCode.


Prepare for cloud security and shared responsibility


IT infrastructure teams typically control the platform from the ground up and through the OS layer. Admins work with security teams to ensure platforms are hardened and adhere to compliance needs. After the platform is built, infrastructure and security teams turn it over to the dev or application owners for final installations and deployments. Application owners still work with an infrastructure team to ensure security and compliance measures are maintained through the deployment process. Ideally, the platform gets a final verification from the security team. The same parties will still be involved and maintain that level of ownership and responsibility even if an organization uses automation. But this process gets upended when a cloud provider gets involved. AWS manages the hypervisor, hardware and, in some cases, the OS. This means the deployment process starts in the middle of the traditional application lifecycle rather than at the beginning. Admins have to find a way to contribute in an ecosystem where the infrastructure is run by another party.


Digital dexterity: What it is, why your organization needs it, and how CIOs can lead the charge


If you're not sure what digital dexterity is, you aren't alone. Craig Roth, Gartner Research vice president, explained it as "the ability and ambition to use technology for better business outcomes." That definition can still seem a bit fuzzy if you aren't sure where ability and ambition come into the successful use of tech in business, but digging down just a bit helps make the whole thing more understandable. Helen Poitevin, vice president and analyst at Gartner, expands the definition of digital dexterity by adding that it's less about tech skills and more about "a specific set of mindsets, beliefs and behaviors." ... So, where does the CIO fit into all of this? They're basically the cornerstone of the entire concept, said Daniel Sanchez Reina, senior director and analyst at Gartner. "The CIO will play a key role in supporting desired behaviors and changing the processes, procedures, policies and management practices that shape how work gets done to encourage desired behaviors." It can be tough to transform an entire organization from one that resists, or at the very least grudgingly accepts, new technology. CIOs have a tough road ahead of them, but that doesn't mean it's impossible.


New ransomware strain uses ‘overkill’ encryption to lock down your PC


FortiGuard Labs says that 2048- and 4096-bit keys are generally more than adequate to encrypt and secure messages, and so the use of an 8192-bit key size is "overkill and inefficient for its purpose." "Using the longer key size adds a large overhead due to significantly longer key generation and encryption times [...] RSA-8192 can only encrypt 1024 bytes at a time, even less if we consider the reserved size for padding," the researchers note. "Since the configuration's size will surely be more than that due to the fact that it contains the encoded private key, the malware cuts the information into chunks of 1000 (0x3e8) bytes and performs multiple operations of the RSA-8192 until the entire information is encrypted." The heavy use of encryption means that it is "not practically possible" to decrypt a compromised system, according to the cybersecurity firm. This is unfortunate, as decryption programs offered by cybersecurity firms can sometimes be the only way to recover files lost to ransomware infections without paying up.
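The chunking the researchers describe can be sketched as simple arithmetic: an 8192-bit RSA modulus is 1024 bytes, padding reserves part of each block, so the malware reportedly splits its data into 1000-byte (0x3e8) pieces and encrypts each one separately. The sketch below models only the splitting, not the RSA operation itself, and the payload size is illustrative.

```typescript
// Split a payload into 1000-byte (0x3e8) chunks, each small enough to
// fit in one RSA-8192 encryption operation (1024-byte block, minus the
// space reserved for padding).

const CHUNK_SIZE = 0x3e8; // 1000 bytes

function splitIntoChunks(data: Uint8Array, chunkSize: number): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    // subarray clamps to the end of the buffer, so the last chunk
    // simply holds the remainder.
    chunks.push(data.subarray(offset, offset + chunkSize));
  }
  return chunks;
}

// A 2500-byte payload needs three RSA operations: 1000 + 1000 + 500 bytes.
const payload = new Uint8Array(2500);
const chunks = splitIntoChunks(payload, CHUNK_SIZE);
console.log(chunks.map((c) => c.length)); // [ 1000, 1000, 500 ]
```

The overhead the researchers point to follows directly: every chunk costs one full RSA-8192 exponentiation, which is far slower than the usual hybrid scheme of encrypting the data with a symmetric cipher and RSA-encrypting only the symmetric key.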



Quote for the day:


"Don't measure yourself by what you have accomplished. But by what you should have accomplished with your ability." -- John Wooden