Daily Tech Digest - October 04, 2019

The decision to make an Android smartphone more than anything else reflects Microsoft's changing priorities. The Surface brand is now a success, generating over a billion dollars in revenue per quarter. Perhaps the brand is strong enough that there is now demand for a phone-sized Surface device, even if it doesn't run on Windows like the rest of the line. I'm not entirely convinced that's the case, but there will be some enthusiasts who will want to be Surface users, from handheld device to massive collaboration screen. The other reason for a Surface phone is the success of Microsoft's app strategy, which has basically ensured that, even if you aren't using a Windows device, you can still get access to a wide range of Windows services. Microsoft, as my colleague Mary Jo Foley points out, already has over 150 apps in the Google Play app store. Having a phone to showcase those apps makes sense and may even encourage more developers to experiment with new versions that take advantage of those dual screens. Supporting those two strategies is a higher priority than trying to make Windows smartphones happen again.


Hard Fork on Blockchain
With the introduction of blockchain technology in enterprise software development, organizations are asking for guidance on how to deliver DevOps for blockchain projects. ... Blockchain applications are often designed to handle financial transactions, track mission-critical business processes, and maintain the confidentiality of their consortium members and the customers they serve. Software faults in these areas might, in extreme cases, represent a significant risk to an organization. As a result, blockchain applications usually demand more rigorous risk management and testing strategies than traditional software applications. A popular approach is to look at smart contract design much as you’d look at microservice design: Decompose the solution into its core entities and the processes that act on those entities, then develop discrete, composable smart contracts for each entity and process so they can evolve independently over time. 
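The entity/process decomposition described above can be sketched in plain TypeScript (all names and logic here are illustrative, not real on-chain code): each core entity and each process gets its own discrete contract, coupled only through interfaces so either side can evolve independently.

```typescript
// Illustrative sketch only: modeling the decomposition, not real chain code.
interface CustomerContract {
  register(id: string, name: string): void;
  exists(id: string): boolean;
}

interface PaymentProcessContract {
  settle(customerId: string, amount: number): boolean;
}

// Entity contract: owns customer state and nothing else.
class CustomerRegistry implements CustomerContract {
  private customers = new Map<string, string>();
  register(id: string, name: string): void {
    this.customers.set(id, name);
  }
  exists(id: string): boolean {
    return this.customers.has(id);
  }
}

// Process contract: depends only on the entity contract's interface,
// so each contract can be upgraded or redeployed independently.
class PaymentSettlement implements PaymentProcessContract {
  constructor(private customers: CustomerContract) {}
  settle(customerId: string, amount: number): boolean {
    // Reject settlements for unknown customers or non-positive amounts.
    return this.customers.exists(customerId) && amount > 0;
  }
}

const registry = new CustomerRegistry();
registry.register("c1", "Acme Corp");
const settlement = new PaymentSettlement(registry);
console.log(settlement.settle("c1", 100)); // true
console.log(settlement.settle("c2", 100)); // false: unknown customer
```

Because the process contract sees only the entity interface, a new version of either one can be swapped in without touching the other, mirroring the microservice analogy in the excerpt.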



Information security leaders are certainly aware of the potential hazards of insiders and have taken steps to mitigate the risk. Some 69% of organizations that suffered a data breach due to an insider threat said they did have a prevention solution in place at the time. Accordingly, 78% of information security leaders acknowledged that their prevention strategies and solutions aren't sufficient to stop insider threats, not even with traditional data loss prevention (DLP) tools in place. "We're seeing companies empower their employees without the proper security programs in place, leaving companies in a heightened state of risk," Jadee Hanson, CISO and vice president of information systems at Code42, said in a press release. "In addition to enforcing awareness trainings, implementing data loss protection technologies and adding data protection measures to on- and off-boarding processes, organizations should not delay in launching transparent, cross-functional insider threat programs. Insider threats are real. Failing to act will only result in increasingly catastrophic data loss and breaches."


TinyML: The challenges and opportunities of low-power ML applications


TinyML can be used anywhere it’s difficult to supply power. “Difficult” doesn’t just mean that power is unavailable; it might mean that supplying power is inconvenient. Think about a factory floor, with hundreds of machines. And think about using thousands of sensors to monitor those machines for problems (vibration, temperature, etc.) and order maintenance before a machine breaks down. You don’t want to string the factory with wires and cables to supply power to all the monitors; that would be a hazard all its own. Ideally, you would like intelligent sensors that can send wireless notifications only when needed; they might be powered by a battery, or even by generating electricity from vibration. The smart sensor might be as simple as a sticker with an embedded processor and a tiny battery. We’re at the point where we can start building that. Think about medical equipment. Several years ago, a friend of mine built custom equipment for medical research labs. Many of his devices couldn’t tolerate the noise created by a traditional power supply and had to be battery powered.


5 technical capabilities required in modern enterprise data strategies

While Hadoop was the early winner in big data platforms, enterprises today are investing in a mix of them, including Apache Spark, Apache Hive, Snowflake, multiple databases supported on AWS, Azure and Google Cloud Platform, and many others. Using multiple big data platforms creates significant challenges for CIOs because attracting data- and analytics-skilled people is highly competitive and managing numerous platforms adds operational and security complexities. While many enterprises are likely to consolidate to fewer data platforms as part of their strategy, they must also consider services, tools, partnerships and training to provide better support across several data platforms. Since large enterprises are unlikely to be able to centralize data in one data warehouse or data lake, establishing a data catalog becomes even more strategically important. Data catalogs help end users search, identify and learn more about data repositories that they can use for analytics, machine learning experiments and application development.


Modernize Your C# Code - Part IV: Types

The relevance of inspecting and accessing type information at runtime increased, leading to capabilities like reflection. While classic native systems (e.g., C++) usually have very limited runtime capabilities, managed systems (e.g., the JVM or .NET) appeared with vast possibilities. One of the issues with this approach today is that many types no longer originate in the underlying system - they come from deserialization of some data (e.g., an incoming request to a web API). While basic validation and deserialization could be driven by a type defined in the system, the data usually matches only a derivation of such a type (e.g., omitting certain properties, adding new ones, changing the types of certain properties, ...). As it stands, duplication and limitations arise when dealing with such data. Hence the appeal of dynamic programming languages, which offer more flexibility in that regard - at the cost of type safety during development. Every problem has a solution, and in the last 10 years we've seen new love for type systems and type theory appearing all over the place.
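TypeScript's built-in utility types are one concrete answer to the duplication problem described here: request and response shapes can be derived from a single source type instead of being redefined by hand. A minimal sketch, with hypothetical type and field names:

```typescript
// Hypothetical source type; the field names are illustrative.
interface User {
  id: number;
  name: string;
  email: string;
  passwordHash: string;
}

// Response shape: the source type minus server-only properties.
type UserResponse = Omit<User, "passwordHash">;

// Creation payload: no id yet, and the password arrives in plain text,
// i.e. a derivation that omits, adds, and changes properties.
type CreateUserRequest = Omit<User, "id" | "passwordHash"> & { password: string };

// Strip the sensitive property before sending the object out.
function toResponse(user: User): UserResponse {
  const { passwordHash, ...rest } = user;
  return rest;
}

const req: CreateUserRequest = { name: "Ada", email: "ada@example.com", password: "secret" };
console.log(req.name); // Ada

const user: User = { id: 1, name: "Ada", email: "ada@example.com", passwordHash: "x" };
console.log(toResponse(user)); // { id: 1, name: 'Ada', email: 'ada@example.com' }
```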


DARPA looks for new NICs to speed up networks

The FastNICs program will select a challenge application and provide it with the hardware support it needs, operating system software, and application interfaces that will enable an overall system acceleration that comes from having faster NICs. Researchers will design, implement, and demonstrate 10 Tbps network interface hardware using existing or road-mapped hardware interfaces. The hardware solutions must attach to servers via one or more industry-standard interface points, such as I/O buses, multiprocessor interconnection networks and memory slots, to support the rapid transition of FastNICs technology. “It starts with the hardware; if you cannot get that right, you are stuck. Software can’t make things faster than the physical layer will allow so we have to first change the physical layer,” said Smith. The next step would be developing system software to manage FastNICs hardware. The open-source software, based on at least one open-source OS, would enable faster, parallel data transfer between network hardware and applications.


Why DevOps underscores the importance of software testing

Continuous testing is definitely a hot area right now. We hear a lot about it, when we're out and about speaking with customers and all. Obviously, you've got a variety of roles if you're going to make continuous testing work. And I know there's a lot of definitions out there, so maybe I should start with my definition, which is that continuous testing is really the practice of testing across the entire lifecycle. The goal there is to place testing and do testing at the right time in the right place, where you're going to uncover any defects or unexpected behaviors quickly, and resolve them, obviously, but most importantly help the business make good decisions. So, I think there's a lot of roles in that definition. You've obviously got your traditional software testers -- they might be manual, they might be automated, we can talk about manual versus automated -- they're obviously playing a critical role in continuous testing, but so are your developers. Because we really do expect and want developers to be involved in the testing process, [for] at least the unit tests level.


Chinese cyberespionage group PKPLUG uses custom and off-the-shelf tools

What makes this group stand apart is its use of both off-the-shelf and custom-made malware tools. This includes publicly available Trojan programs like PlugX -- from which the group’s name is derived -- and Poison Ivy. One of PKPLUG’s common tactics is to deliver the PlugX malware inside a ZIP archive, whose header begins with the ASCII characters “PK”. The group also makes heavy use of DLL side-loading to execute its malicious payloads. This type of attack occurs when a legitimate program searches for a DLL library by name in various locations, including the current folder, and automatically loads it into memory. If attackers replace the library with a malicious one, the malware will be loaded and executed instead. This decreases the payload’s chance of being detected, since the process that performs the loading is not itself malicious. The group favors spear-phishing emails to deliver its payloads and uses social engineering to trick users into opening attachments. However, some limited use of Microsoft Office exploits has also been observed, as has the use of malicious PowerShell scripts.


Why TypeScript?

The TypeScript compiler does not strictly mandate type declarations. IDEs like Visual Studio and tools like the Angular CLI do care, because they provide design-time type checking, and the compiler enforces types in strict mode. The overhead of declaring types soon pays off: design-time and compile-time type checking boost your productivity when constructing complex structures and workflows. I am well aware that there exist super-smart JavaScript developers who can construct complex structures and workflows with high productivity (and quality) without it; however, I tend to think they are among only the top 1% of JavaScript developers in the trade. Even if you are that smart, why spend your mental energy on type checking when the IDE and compiler can do it for you? One of the primary reasons TypeScript was invented was to enable sophisticated tooling for software development.
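A minimal sketch of the payoff described above (the types and names are illustrative): annotate a structure once, and every use of it is checked at design time and compile time.

```typescript
// Annotate the structure once...
interface Order {
  id: number;
  items: { sku: string; quantity: number }[];
}

// ...and every use of it is checked for you.
function totalQuantity(order: Order): number {
  return order.items.reduce((sum, item) => sum + item.quantity, 0);
}

const order: Order = {
  id: 42,
  items: [{ sku: "A1", quantity: 2 }, { sku: "B2", quantity: 3 }],
};
console.log(totalQuantity(order)); // 5

// Both of the following are flagged at design time, before the code runs:
// totalQuantity({ id: 42 });          // error: property 'items' is missing
// order.items[0].quantity = "three";  // error: string is not assignable to number
```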



Quote for the day:


"The essence of leadership is the willingness to make the tough decisions. Be prepared to be lonely." - Colin Powell


Daily Tech Digest - October 03, 2019

Is hybrid cloud certification right for you?

One of the biggest mistakes a company could make, in Russell’s opinion, is having only one hybrid cloud expert. “You can have someone who acts as a catalyst – someone who is curious about the technology and gets you started. But the organization won’t survive well if only one person has the skill set. You need to have best practices for mindshare and knowledge transfer,” he says. Fuchs feels similarly: “We want to encourage purposeful cloud adoption.” NetApp holds workshops at customer sites to get stakeholders up to speed on the foundational aspects of hybrid cloud, as well as to provide specialized training for specific cloud-related roles, such as how to best use analytics. “These decisions are getting more sophisticated and more data-driven because the tools are getting stronger, the processes are getting stronger, and education is getting stronger. Organizations are able to review their bills and try to reduce costs. The more trained your team is, the likelier they are to make good decisions,” he says. Williams recommends that anyone interested in gaining certification “should examine their own role in managing hybrid cloud operations and go after the certification that best supports the organization’s needs as well as their own.”



Everything you need to know about Microsoft's dual-screen OS


For all intents and purposes, yes, Windows 10X is the official name for Windows Lite/Santorini. It is not a new operating system. It's Windows 10, in a more modular form, optimized for dual-screen/foldable devices. ... WCOS is one piece of the underpinnings of Windows 10X. In the past, I (and others) have described WCOS as the successor to Windows OneCore -- Microsoft's attempt to standardize a set of core components in Windows so that they would work across different types of devices. But WCOS is a combination of the OneCore OS pieces, UWP/Web and Win32 app packages, and the composable C-Shell. (See architectural diagram above.) Together, these are the foundational pieces of Windows 10X. ... As officials said today, Surface Neo, the dual-screen Surface device due around holiday 2020, will run Windows 10X. Any new dual-screen and foldable Windows devices from Microsoft partners like Dell, Lenovo, HP, Asus, and others also will likely ship with Windows 10X (and likely not before holiday 2020). Just to keep things confusing, the just-announced Arm-based Surface Pro X cannot run Windows 10X, despite the "X" in both product names.


Organizational vs. operational resilience: What's the difference?


Operational resilience examines what the business actually does and what it needs to continue performing those activities. This differs from organizational resilience in that OR looks at the entire organization, while OpR is more process-oriented, examining how the business functions and what the organization needs to protect those processes. What do businesses need to operate today? As with any business initiative, the push for OpR must start at the top. Senior management must be aware of the importance of maintaining OpR and must support initiatives such as the creation of policies, frameworks and structures that support OpR. These then filter down to operational teams to implement programs, controls and procedures to produce products and services. ... BC/DR, cybersecurity and supply chain initiatives are all essential building blocks for achieving organizational resilience as noted in the above figure.


It's time to change your cloud operating model

As the organization moves to cloud computing, application workloads should be able to move directly to a new operating model. This is a big job and requires support from IT leadership. If your organization is so inclined, consider building a cloud center of excellence, as many enterprises are doing these days. Enterprises typically have a large backlog of applications—numbering in the thousands—that can move through an assessment and be mapped onto a new operational model. This means that a roadmap is created for how applications will be processed and operated in the public cloud. I’ve found that short enablement sprints are better than one long one; moreover, the teams learn a lot as they move applications through the new operational model. However, this is a disruptive change in workflow for most enterprises, with associated pain and costs. Many changes are necessary, including training, mentoring, coaching, knowledge sharing, and open-door policies, to make this work. Finally, you need support from the boardroom. This is the only way you’ll become an organization that’s able to leverage the public cloud to a productive end.


How to Dynamically Build the UI in Blazor Components


You can, using familiar Razor tools when creating a View (or page), dynamically build your component's UI. Alternatively, you can also use the rendering tools built into Blazor to dynamically construct the UI that makes up your component at startup. I'm going to show how both of those options work in this column. That's not the same as manipulating your component's HTML as your component executes. For that you can use binding, buy a third-party component, or call out to jQuery through Blazor's JavaScript interop. But if you want to create an initial UI dynamically, here's how you'll do it. As my case study I'll use an (admittedly, contrived) View that contains multiple forms. In this case study, the Model object that's passed to this View contains an ArrayList of objects for a single customer. The ArrayList can contain any combination of different "customer related" objects: the customer's profile object, the customer's address object, the customer's billing plan and so on. In this View, we'll set up each object with a different form and each form will have a button that invokes a different C# method to handle processing that form.


The Flavors of APIs

“RESTful” (or “REST-like”) APIs are those which conform to all or most of the principles and constraints of REST, as defined by Roy Fielding in his 2000 dissertation titled “Architectural Styles and the Design of Network-based Software Architectures”. ... The HTTP methods are based on verbs that act on resources. The same way I would go to the store to get some groceries — a client goes to a location (URL) to get (method) a resource (URI). Everything on the web is a resource, and each resource has a uniform resource identifier. We use uniform resource locators to find those resources. Finally, we use the methods to indicate what we want to do with those resources. In the example below, we’re using the HTTP GET method — to get the resource. ... gRPC builds on the traditional remote function or procedure calls utilized in systems of the past. Essentially, an RPC or RFC is a type of API that allows a function or procedure to be called as if it were local — despite that function or procedure living on a remote server. It leverages a form of a client-server model and incorporates the concept of a stub. gRPC takes this concept and optimizes it for modern cloud infrastructure.
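The stub concept behind RPC and gRPC can be sketched in a few lines of TypeScript (the service, its method, and the simulated transport are all illustrative, not the real gRPC API): the client calls what looks like a local function, and the stub marshals the call onto a transport and unmarshals the result.

```typescript
// Hypothetical service definition; in real gRPC this would come from a .proto file.
interface InventoryService {
  getStock(sku: string): Promise<number>;
}

type Transport = (method: string, args: unknown[]) => Promise<unknown>;

// The stub exposes the same interface as the remote service; its only job
// is to marshal the call onto a transport and unmarshal the result.
class InventoryStub implements InventoryService {
  constructor(private transport: Transport) {}
  async getStock(sku: string): Promise<number> {
    return (await this.transport("getStock", [sku])) as number;
  }
}

// Fake in-process transport standing in for the network and the remote server.
const fakeTransport: Transport = async (method, args) =>
  method === "getStock" && args[0] === "A1" ? 7 : 0;

const client: InventoryService = new InventoryStub(fakeTransport);
client.getStock("A1").then((stock) => console.log(stock)); // 7
```

To the calling code, `client.getStock("A1")` is indistinguishable from a local call; only the transport knows the function lives elsewhere, which is the property gRPC optimizes for modern cloud infrastructure.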


Westcon-Comstor Builds a More Visible WAN

“We had too many site routers and we had a mix of aging and new infrastructure,” said Soler. “There were two pieces we were looking for: To improve resiliency in terms of failover and deliver resiliency to the business. SD-WAN was there. Players were already doing it and some of our partners were getting into the game.” Soler says that overall, the move to SD-WAN has made his life easier. He can see detailed reporting data about what’s happening everywhere on the network, all from a single screen. And with the new capabilities for failover, users don’t notice network outages, giving him more time to work behind the scenes. “There is failover redundancy so when something happens, we can focus on resolving the issues and our users don’t even notice,” he said. The most attractive features of the Silver Peak Unity EdgeConnect™ SD-WAN edge platform, according to Soler, are the ease of use in the deployment using centralized software-based orchestration, as well as the failover and performance features such as forward error correction (FEC) and path conditioning.


Minerva attack can recover private keys from smart cards, cryptographic libraries

The Minerva attack at the heart of all these issues is a classic side-channel attack. A side-channel attack is when a third party observes leaks in cryptographic operations that, when put together, can help the attacker break the encryption scheme and reconstruct the original data. This is what happens in Minerva as well. The Czech team found a problem in the ECDSA and EdDSA algorithms used by the Atmel Toolbox crypto library to sign cryptographic operations on Athena IDProtect cards. These operations leaked "the bit-length of the scalar during scalar multiplication on an elliptic curve," researchers said. If an attacker is able to observe or record enough cryptographic operations signed by a vulnerable smart card or by one of the vulnerable open-source cryptographic libraries, they'll be able to compute the private encryption keys that sign these operations. During tests, researchers said they only needed to record 11,000 operations (card swipes) from an Athena IDProtect card to obtain its private key. The whole process took 30 minutes, researchers said.
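A toy illustration of why the bit length leaks (this is not the Atmel Toolbox code, just a sketch of the general principle): a naive left-to-right double-and-add scalar multiplication loop does work proportional to the scalar's bit length, so timing the operation reveals that length, and Minerva recovers the key by combining that leak across many signatures.

```typescript
// Toy model only: count the loop iterations a naive double-and-add scalar
// multiplication would perform. The iteration count equals the scalar's
// bit length, so execution time leaks that length to an observer.
function doubleAndAddSteps(scalar: bigint): number {
  let steps = 0;
  for (let k = scalar; k > 0n; k >>= 1n) {
    steps++; // one "double" (and possibly one "add") per bit processed
  }
  return steps;
}

console.log(doubleAndAddSteps(0b1011n)); // 4: a 4-bit scalar takes 4 iterations
console.log(doubleAndAddSteps(0b1n)); // 1: a short scalar finishes much sooner

// A constant-time implementation would instead always loop over the full
// fixed bit width of the curve order, whatever the scalar's actual value.
```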


Banking, Tech Communities Are ‘Breathless’ About Fintech, But Is It All Hype?

“The deep-seated belief that cloud is insecure remains for a large swathe of bankers. It hasn’t helped that Capital One recently had a breach of their data in the [Amazon Web Services] cloud,” wrote fintech expert Alex Jimenez in a blog post. Lawrence White, professor of economics at New York University’s Leonard N. Stern School of Business, told InsideSources he thinks Deutsche Bank’s report exaggerates the impact of fintech on the banking community. New technology will improve existing banking processes, he said, but not fundamentally transform it the way tech experts say. “Yes there are some new entities in this lending world, what are called marketplace lenders, peer-to-peer type lending platforms, which have a little bit of a niche, but haven’t really eaten the lunch of the existing institutions,” White said. “As data gathering and analysis gets better, and the inexpensive transport of the data from one place to another [gets better], all of that makes this analysis more comprehensive and ought to make the banks better at what they’re doing. The world of Big Data brings more information and the need for greater analytical tools and techniques. At the end of the day, [banking is] basically the same process, trying to figure out who’s a good risk, who should I lend my money to?”


Q&A on the Book Managing Technical Debt

For many development projects, technical debt is discovered when symptoms of slowing development or defects point to workarounds or "fix me" comments in the code. It is important not to stop at the symptom, but to trace to the underlying software artifact so the technical debt item can be described and managed just like other software development issues. ... not all technical debt can be detected automatically. The number one step in recognizing technical debt successfully is to empower the development teams to concretely and openly share technical debt when they see it. ... We also advocate that teams make technical debt conversations part of their routine review, retrospective and planning procedures. And of course, as we give many examples of in the book, the most costly technical debt is that which accumulates over time with an impact on the system's architecture; therefore, having an architecture mindset, conducting design reviews, and making architecture design trade-offs as explicit as possible will also help in uncovering existing technical debt, as well as in recognizing technical debt as teams take it on.



Quote for the day:


"Leaders need to strike a balance between action and patience." -- Doug Smith


Daily Tech Digest - October 02, 2019

U.S. Government Confirms New Aircraft Cybersecurity Move Amid Terrorism Fears

Modern aircraft are essentially “flying data centers in the sky,” says Ian Thornton-Trump, security head at AMTrust Europe. “It's natural for the Air Force to apply its cyber defensive and offensive skills in order to ensure the logistical and refuelling fleet is robust when it comes to physical and cybersecurity. I believe this is a great idea and the Air Force is about to pick up the cybersecurity ball after the FAA–for a lot of reasons–either dropped it or had it taken away.” He points out that the Air Force's mission of “fly, fight and win in air, space and cyberspace” cannot be achieved “if the civilian platforms they have prove vulnerable to cyberattack.” It’s a major issue: the consequences of cyberattacks targeting commercial aircraft could be “devastating” and put people's lives in danger, says Andrea Carcano, co-founder of Nozomi Networks. “Airlines therefore need to develop security strategies where vulnerabilities are monitored and mitigated continuously.”



Why military minds should fill cybersecurity seats on corporate boards

Well, this is not about appointing somebody to go through the techno-babble or the IT geekiness of it. It's really about understanding operational risk, and this is where veterans come into play, because veterans at a lot of levels, but really at the senior officer levels, understand operational risk and risk to mission. They're trained to understand technical issues. Take my background, for example: it's with the US Navy. Ships are complex machines; they are whole mechanical and electrical systems. There are systems of systems embedded within these ships, and so it doesn't matter what your job is on board, you understand technical issues, and you understand how those systems play with each other to carry the whole. And so it's all about operational risk, and the senior ranks have extensive planning, strategy, and decision-making experience that could benefit the board's oversight role. And again, getting back to the information and risk part, understanding and mitigating risks to the mission is a core competency in the military.


Singapore online falsehoods law kicks in with details on appeals process


The legislation was mooted as a way to "protect society" against online false news created by "malicious actors", which the Law Ministry said could be used to divide society, spread hate, and weaken democratic institutions. The government, however, was urged to make key amendments to better reassure the public it would not be used to stifle free speech, with several arguing that the act provided the government "far-reaching powers" over online communications. Industry players and observers expressed concerns that the law would afford the Singapore government "full discretion" over whether a piece of content was deemed true or false. Under POFMA, two criteria must be met for the law to apply: there must be a false statement of fact, and it must be in the public interest to act. The law does not cover criticism, satire, parody, or opinion. Comments on falsehoods are also excluded, though the Law Ministry has cautioned that "care" should be exercised to "avoid repeating" the falsehood. It also assures that the act will not be used to punish people for sharing falsehoods "in ignorance [and] good faith".


The Inestimable Values of an Attacker's Mindset & Alex Trebek

For three years, Pardee performed network analysis to include target characterization, exploitation usage, documentation, and exploit planning to help the intelligence agency extract insights from targets. Yet he'd begun as an electrical engineering major, with dreams of working on mobile communications, and was initially hired by NSA to work on power distribution logistics. Pardee didn't have any training on cyberattacks or defense. What he did have was a strong set of critical thinking, logic, and problem-solving skills – a highly translatable skillset that was further honed by his NSA work. The agency trained him on the rest. "Looking back on it, I got a lot of interesting classes and experiences there to learn about security from the other side first. Everything was taught through an attacker's lens," he says. "Now, as I've continued my career, I see how valuable that is." Many IT professionals, he explains, will begin their careers learning about the right way to do things.


Here's What Hackers Don't Want You to Know


It's not enough to just set up a segmented network and forget about it. Security isn't a set-it-and-forget-it proposition. It requires constant monitoring, scrutiny, and support. Your CSO has to inspect the logs every day to ensure everyone who has gained access to the network is supposed to be there. Your CSO has to ensure that everyone who has access to the network only has access to what they need and nothing more. Your CSO has to ensure that people are changing their passwords on a regular basis, not using those passwords anywhere else, and using passwords with the proper amount of complexity. This, of course, means that your summer intern can't serve as your company's CSO. Neither can Bob in the accounts receivable department. You have to have someone whose dedicated job is to maintain the security in your network. If you have a small-to-medium-size business and you can't afford this, hiring a third party to manage this for you is probably going to be your best option.


Serverless Security Threats Loom as Enterprises Go Cloud Native

As companies start using new cloud-native technologies including serverless functions, they also need to update their understanding of security threats and how to implement the right security controls. The study found that API-related vulnerabilities are the top threat concern (63% of respondents) when it comes to serverless usage within organizations. An example of this threat is attackers misusing privileged accounts to execute serverless functions. “So even though we are talking about something new,” Cahill said, referring to serverless, “the attack vectors and methods are old methods applied to a new technology. So we should always be thinking about how privileged accounts are being used. We want to make sure we implement a least-privilege model” to restrict access for accounts to only the resources required to perform routine, legitimate activities. Another example, he said, is fuzzing, “which is basically putting in parameters at the end of an API call as a way to take over the API call.”


Intel proposes new SAPM memory type to protect against Spectre-like attacks

Researchers say their "proposal provides more flexibility to software" by moving most of the mechanism that prevents speculative execution attacks to the hardware level. The idea is that most speculative execution side-channel attacks can be split into two parts: the "frontend" part of the exploit code, and its "backend." Intel STORM researchers say the second part (backend) of most speculative execution attacks performs the same actions. SAPM was designed to introduce hardware-based protections against the backend part of most attacks. It's because of this concept that Intel's research team believes that SAPM will also future-proof the next generations of Intel CPUs against other -- currently undiscovered -- speculative execution attacks. But the idea of introducing new mitigations will always raise questions about reducing CPU performance. Intel STORM researchers don't deny that there's a performance hit; however, this impact is low and could be mitigated further by dropping other existing protections.


Automation with intelligence


Organisations believe they can transform their business processes, achieving higher speed and accuracy by automating decisions on the basis of structured and unstructured inputs. They expect an average payback period of 15 months – and in the scaling phase just nine months. Process fragmentation – the way in which processes are managed in a wide range of methods – is seen by 36 per cent of survey respondents as the main barrier to the adoption of intelligent automation. IT readiness is considered the main barrier by 17 per cent of organisations. ... almost two-thirds of organisations have not considered what proportion of their workforce needs to retrain as a result of automation. Even organisations that have automated at scale (51+ automations) are not yet thinking about this, with 53 per cent stating that they have not yet explored whether their workforce needs to reskill as a result of their automation strategy. Reskilling based on how the human workforce will interact with machines, including changes to role definitions, should be baked into organisations’ plans for intelligent automation adoption in order to leverage the expected capacity enhancement.


Is Swarm AI the answer to fears over AI and jobs?

Swarm AI is a technology developed by Unanimous AI. A previous study, conducted at Stanford University School of Medicine, looked at groups of radiologists using Swarm AI to collaboratively diagnose chest x-rays. Published results showed a 33% reduction in diagnostic errors when using Swarm AI. Compare this finding with the results of another study showing AI can match humans in disease diagnosis. It seems that AI is powerful, but in combination with humans, more so. But add to the mix AI being used to help humans collaborate more effectively, and the end result could be formidable indeed. In another recent study, business teams were tested on a standard IQ test using Swarm AI and were shown to increase their effective IQ by 14 points. The latest study looking at Swarm AI, this time produced in conjunction with the California Polytechnic State University, found "AI technology modelled on biological swarms could be used to accurately predict which business teams would be high performing based on the personality of the individual members."



Teo said the OT cyber security masterplan, developed together with industry partners, will guide the development of capabilities to secure systems in an OT environment and mitigate emerging threats to those systems. He added that the masterplan has outlined plans to train more OT cyber security professionals with advanced cyber security skills, and to establish an OT cyber security information sharing and analysis centre with the Global Resilience Federation (GRF). Managed by the Asia-Pacific business unit of GRF, the centre will serve as a threat information sharing hub for companies in energy, water and other CII sectors in Singapore. "Singapore offers a strong economy, a highly educated workforce, a central location, and an environment friendly to trade and investment," said Mark Orsi, president of GRF.



Quote for the day:


"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad


Daily Tech Digest - October 01, 2019

The dark web's latest offering: Disinformation as a service


The campaigns followed similar strategies to nation-state-backed disinformation campaigns, using newly created and long-established accounts on 'major social media platforms' to help spread information. In some cases, what appeared to be real users were replying to the accounts of the companies. But it isn't just by exploiting social media that those selling disinformation services on the dark web go about their business: they'll create their own articles and blogs to help push the agenda they've been provided with. ... Researchers say an article ended up being published as news on two media sources, illustrating the ease at which the information can spread. The other user also offered edits based on feedback before setting about sharing the disinformation using social media accounts, including older, more established accounts – which then had their message amplified by bots and sock-puppet accounts.  Some of these accounts even went so far as to communicate with or attempt to befriend users in the targeted countries to make the campaigns more effective by encouraging real people to share the disinformation.



Phish Uses Google’s URL Decoding to Swim Past Defenses

A phishing campaign that takes advantage of Google’s ability to decode non-ASCII URL data on the fly is making the rounds – looking to fool the unsavvy by effectively hiding the website address of the campaign’s phishing page. The campaign makes use of what’s called percentage-based URL encoding – a basic URL-encoding technique in which normal ASCII characters (i.e., “abc” and “123”) are converted into a string that starts with “%” and is followed by two hexadecimal digits. When resolving such an address, Google will convert this non-ASCII format into a string that is universally accepted and understood by all web browsers and servers, on the fly. The cybercrooks are making use of this in order to trick secure email gateways (SEG) into delivering their phishing emails, by hiding the true destination of the messages’ embedded malicious links. That’s according to the Cofense Phishing Defense Center, which last week observed a specific campaign using the method.
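The decode round-trip the attackers rely on is easy to reproduce with Python's standard library. In this sketch the destination URL is a made-up placeholder, not one from the campaign:

```python
from urllib.parse import unquote

# Hypothetical phishing destination, for illustration only.
plain = "http://evil.example/login"

# Encode every character as a %XX hex escape so the link text is
# unreadable to a casual filter or an unsavvy recipient.
obfuscated = "".join(f"%{ord(c):02X}" for c in plain)

# Browsers and URL resolvers decode the escapes transparently,
# recovering the original destination.
assert unquote(obfuscated) == plain
```

A secure email gateway that inspects only the raw link text sees a string of hex escapes, while the resolver that ultimately handles the click sees the real address.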


Former Army Contractor Gets Prison Term for Insider Attack

Barrence Anthony, 40, of Waldorf, Maryland, pleaded guilty in May to a single count of unlawfully accessing a protected computer. On Friday, a federal judge in Virginia sentenced the former systems engineer to two years in prison as well as ordering him to pay $50,000 in restitution, according to court documents. For several years, Anthony worked as an engineer for Federated IT, a federal contractor that provides technology and support services for a number of different military and federal government agencies, according to the Justice Department. In this case, Federated IT built and maintained financial applications on Microsoft SharePoint instances for the U.S. Army's Chaplain Corps Religious Support System, which is based in the Pentagon and provides religious services and support for soldiers, according to court documents. These instances were hosted on Amazon Web Services cloud infrastructure, the documents show. Federated IT also provided IT support services for about 9,000 people working for the Army's Chaplain Corps, documents show.


8 uses for RPA in HR operations


RPA is an ideal way to review data change requests from a ticketing system and make appropriate changes in the HRIS, which can then route through the appropriate channels for approval. The software can check changes against compliance and organizational rules to ensure they are eligible and accurate; rejected changes are sent back to be updated or routed to HR for further review. RPA can also automate data sharing between systems. Some processes require data to be uploaded into one system from another system. For example, think performance ratings for compensation planning or compensation history for variable pay processing. RPA software can automate the extraction of this data from the source system, transform it into the target system format and then upload the data into the target system. RPA software can analyze data sets -- either directly in the system or by downloading or extracting data -- and provide audited results to HR for review and eventually correction, if required.
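A minimal sketch of that extract-transform-upload flow, with plain Python dictionaries standing in for the source and target HR systems (all field names are invented for illustration):

```python
# The "systems" here are in-memory stand-ins for real HR platforms.
source_system = [
    {"employee_id": "E100", "perf_rating": "4"},
    {"employee_id": "E101", "perf_rating": "5"},
]
target_system = {}

def extract(records):
    """Pull raw records from the source system."""
    return list(records)

def transform(record):
    """Convert a source record into the target system's format."""
    return {"id": record["employee_id"], "rating": int(record["perf_rating"])}

def upload(record, target):
    """Load the transformed record into the target system."""
    target[record["id"]] = record

for raw in extract(source_system):
    upload(transform(raw), target_system)

assert target_system["E100"]["rating"] == 4
```

In a real RPA deployment the extract and upload steps would drive the systems' own exports, APIs, or user interfaces, but the shape of the pipeline is the same.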


Why quantum needs a classic approach for supremacy


Generally speaking, a quantum computer does not offer the precision of a classical computer architecture, which relies on binary, 0 and 1, yes and no decisions. Stefan Woerner, global leader for quantum finance and optimisation at IBM, said: "Classical computers use binary optimisation and make many yes/no decisions that have to be correlated. Whenever you add a binary variable to the problem, you double the number of checks." In practice, this means that when attempting to solve a problem that has several variables, the computations needed to run these correlations grow exponentially. However, Woerner added: "Some problems can be formulated in a way similar to quantum chemistry." This is the domain of quantum computing and, for companies like IBM, it can be applied in areas such as quantum mechanics, genomics, supply chain optimisation and financial risk models.
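The doubling Woerner describes is easy to see in a toy enumeration (a sketch of the search-space growth, not how a real optimiser works):

```python
from itertools import product

# Every additional binary decision variable doubles the number of
# candidate assignments a classical solver may have to check.
def candidate_count(n_vars: int) -> int:
    return len(list(product([0, 1], repeat=n_vars)))

assert candidate_count(3) == 8    # adding one more variable...
assert candidate_count(4) == 16   # ...doubles the search space
```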


Programming before Java
Over the last 20 years, Java has become the most popular object-oriented language. It conquered the enterprise world and still has one of the biggest communities. Industrial development today rests almost entirely on the object-oriented paradigm (OOP). But here, I want to raise some skepticism about that fundamental paradigm. First, let's return to the past, when Java didn't exist. ... Unfortunately, the majority of enterprise projects become unsupportable quite rapidly. As a consequence, many enterprise projects face a constant migration process and unacceptable timelines. And sometimes, a bug fix is estimated to take more time than simply rewriting from scratch. In addition, the word "legacy" scares IT employees more than ever before. In my experience (more than 30 enterprise projects), the key problem is the project's architecture, which looks more like a mess of patterns than anything else. Often, many patterns are used inappropriately or without any purpose, all so that developers can follow "modern OOP trends," sometimes referred to as the OOP cargo cult.


How to become an Alexa developer: A cheat sheet


Any developers or businesses that want to build out and utilize intelligent, voice-powered services will be affected by advances and changes that are being driven by Amazon Alexa. Alexa is built using artificial intelligence (AI) technologies, but Sobolewski said that would-be developers don't need a background in natural language understanding or speech recognition to get started. Additionally, there are beginner tutorials available as well, so even very junior software engineers can start working with the platform. Non-developers can build their own simple skills using predetermined frameworks called Alexa Skill Blueprints, which were revealed in April 2018. Amazon also offers Alexa SDKs for Node.js, Java, and Python, as well as an ASK Toolkit for Visual Studio Code, making it easy for developers to build Alexa skills using familiar languages and IDEs. Alexa is not confined to home and consumer use cases. Alexa for Business provides functionality for professional/productivity use cases, and Alexa for Hospitality provides the Alexa experience in hotels for controlling in-room devices, playing music, and contacting the hotel for guest services, among other features.


Enterprise Guide to Multi-Cloud Adoption

Multi-cloud may appeal to organizations that want as many choices as possible to exploit the cloud. Using multiple cloud providers offers core advantages that in the past worked as disadvantages. Now, enterprises can avoid vendor lock-in, they can mix and match the strengths of cloud providers to their specific needs, they see more reliability and less downtime by spreading their bets, and they can uphold stronger data governance and security. But it's been a painful process to get there. All this doesn't mean enterprises are giving up on private or hybrid cloud – and while hybrid cloud is often used synonymously with multi-cloud, multi-cloud is in fact a subset of hybrid cloud. Companies are struggling to get the most value out of cloud in general, and multi-cloud may be the answer for some. In a recent column for InformationWeek, Kishore Durg, a senior managing director of Accenture Cloud, wrote that "when it comes to realizing the value of cloud … "


The 7 Biggest Technology Trends In 2020 Everyone Must Get Ready For Now

Technology is currently transforming healthcare at an unprecedented rate. Our ability to capture data from wearable devices such as smartwatches will let us increasingly predict and treat health issues in people even before they experience any symptoms. When it comes to treatment, we will see much more personalized approaches. This is also referred to as precision medicine, which allows doctors to more precisely prescribe medicines and apply treatments thanks to a data-driven understanding of how effective they are likely to be for a specific patient. Although the idea is not new, recent breakthroughs in technology, especially in the fields of genomics and AI, are giving us a greater understanding of how different people's bodies are better or worse equipped to fight off specific diseases, as well as how they are likely to react to different types of medication or treatment. Throughout 2020 we will see new applications of predictive healthcare and the introduction of more personalized and effective treatments to ensure better outcomes for individual patients.


Interview: James Smith, director of digital, Nationwide Building Society

As part of its digital journey, the company needs access to a wide talent base. To this end, the building society will open its first major technology hub in London next year to give it access to the IT professionals it needs to continue its digital journey. The new digital innovation hub will add 750 tech jobs, and the building society is also expanding operations in its home town of Swindon, which currently houses all its 3,500 technology operations staff. In total, Smith manages about 1,500 staff in the digital team. There are about 5,000 IT staff at Nationwide in total. Its people are organised around the work, with squads aligned to particular domains, using agile principles and focusing on digital, says Smith. ... Part of this additional investment will see Nationwide use AI technology and big data to help it understand customers so it can provide additional services, such as money management. This will involve working with financial technology (fintech) suppliers.



Quote for the day:


"Leaders speak truth into people who believe lies about themselves." -- Orrin Woodward


Daily Tech Digest - September 29, 2019

AI used for first time in job interviews in UK to find best applicants

Candidates are ranked on a scale of one to 100 against the database of traits of previous “successful” candidates, with the process taking days rather than weeks or months, says the company. It claims one firm had a 15 per cent uplift in sales. “I would much prefer having my first screening with an algorithm that treats me fairly rather than one that depends on how tired the recruiter is that day,” said Mr Larsen. Griff Ferris, Legal and Policy Officer for Big Brother Watch, said: "Using a faceless artificial intelligence system to conduct tens of thousands of interviews has really chilling implications for jobseekers. "This algorithm will be attempting to recognise and measure the extreme complexities of human speech, body language and expression, which will inevitably have a detrimental effect on unconventional applicants. "As with many of these systems, unless the algorithm has been trained on an extremely diverse dataset there's a very high likelihood that it may be biased in some way, resulting in candidates from certain backgrounds being unfairly excluded and discriminated against."
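Purely as an illustration of the ranking idea – not the vendor's actual algorithm – a candidate could be scored against a benchmark profile of past "successful" hires like this (trait names and values are invented):

```python
def score(candidate: dict, benchmark: dict) -> int:
    """Mean closeness across shared traits (each in [0, 1]), scaled to 1-100."""
    traits = benchmark.keys()
    closeness = [1 - abs(candidate[t] - benchmark[t]) for t in traits]
    return max(1, round(100 * sum(closeness) / len(closeness)))

# Invented benchmark: the average profile of previous successful candidates.
benchmark = {"enthusiasm": 0.8, "clarity": 0.7, "calmness": 0.6}

# A candidate who exactly matches the benchmark scores 100.
perfect = score({"enthusiasm": 0.8, "clarity": 0.7, "calmness": 0.6}, benchmark)
assert perfect == 100
```

The toy version also makes the bias concern concrete: whatever is encoded in the benchmark – including any historical bias in who was labelled "successful" – is reproduced in every score.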



Traditional banks are struggling to stave off the fintech revolution

The other blind spot for legacy banks is their tendency to have a narrow and misguided understanding of disruptive business models. This usually begins with treating a new species of competitors as traditional ones. For example, Cathy Bessant, Bank of America's CTO, commented on Apple's announcement of a new credit card: "My reaction when I saw the announcement was, first competitively, all of the features that are in that card are offerings we have today." The propensity to see only the product or service and not the entire business model is common among incumbents across a range of industries. Kodak, Blockbuster and Nokia were only three of the hundreds of disrupted incumbents that could see only the product (and associated features) that threatened them, and not how the business models of their competitors allowed the creation of entirely new ecosystems that they were poorly equipped to survive in. By stooping to compete on a feature-by-feature basis, incumbents lose the chance to redefine an industry they once dominated.


Arizona getting help developing cybersecurity professionals


From the global to the local, cybersecurity breaches affect us in nearly every aspect of our lives. Hackers don't discriminate. They attack small businesses and multinational corporations, federal agencies and local school districts, the young and old, the rich and poor. Many people have called the internet the modern battlefield and cybersecurity professionals the warriors of the digital age. Getting better at protecting ourselves, our businesses, our citizens and our communities against cyber threats will be one of the defining challenges of the next decade — and something we absolutely have to get right. The chief reason cyber attacks are increasing in number, scope, sophistication and damage is that it is really hard to get ahead of the hackers. Cybersecurity in 2019 and beyond requires a very different approach than we're used to. And that requires a very different kind of cybersecurity professional. The problem is there are far more job openings in cybersecurity than qualified candidates to fill them.


Venture Capital 2.0: This Time It's Different?

We’re starting to see some rationality about this creeping in around the edges. Take Uber, whose theory of success (at least for now) is that it will dominate local markets for both drivers and riders eventually. If you believe that, then it’s worth subsidizing both sides with venture money. Uber may well be Exhibit A of the mythical first-mover advantage illusion. In just three months, Uber lost over $5 billion. The real problem here is one that we’ve seen before—to seed a market, a startup subsidizes early customers. The theory is that once you have them in the door, you can eventually create pricing power and raise prices. Eventually, unless you have some other revenue stream like darkly trading in people’s personal information, you have to charge enough to cover the cost of the service and make a profit. Once those $7 Uber rides start costing $30, riders will be back in their own cars or on the bus.  Another “what were they thinking?” example? E-cigarette maker Juul.


The CIO’s role in driving agile transformation

Some CIOs channel solutions to what their internal teams are skilled at and have the technologies to implement on their own. Others look to outsource more and seek partners or system integrators to oversee implementation. And some CIOs gripe when business leaders have already selected partners or when the CIO is asked to assist or bail out shadow IT. None of these is optimal; delivering innovative solutions faster and with higher quality more often requires a blend of internal resources, partners, reuse of existing platforms, and experimentation with new technologies. CIOs should partner with their business leaders on developing an ecosystem of partners and technologies that drives current and future needs. This is not a procurement process, nor is it a vendor due-diligence process, as both of these assume requirements are known and one or more vendors are already in consideration. This is an exploration, and innovative, digitally minded CIOs are best equipped to define and manage the journey.


HPE Extends Its Cybersecurity Capabilities And Earns Two Cyber Catalyst Designations

Understanding that no cyber resilience solution is complete without the capability to recover from a cyber incident, HPE followed up its delivery of Silicon Root of Trust with its Server System Restore capability, built into iLO 5 amplifier pack. This capability enables organizations to restore servers to their original operating environment. MI&S detailed these capabilities here. HPE continues to deliver on its cyber resilience with two new features that further put the company in a leadership position. One of the newer features that hasn't been covered much is called One-Button Secure Erase. This feature is exactly what it implies - the ability to completely erase every byte of data that sits on an HPE server when an IT department decides to end-of-life infrastructure. When that old server is ready to be recycled or donated, IT organizations can have confidence there will be no traces of data or proprietary information. This is an invaluable feature for organizations of all sizes.


Chatbot: The intelligent banking assistant

With chatbots gaining more traction, many firms across the globe have started offering off-the-shelf products that help developers to build, test, host and deploy these programs using Artificial Intelligence Markup Language (AIML), an open source specification for creating chatbots. A few platforms support integration with payment providers for seamless processing of customer payments based on a customer's interaction with the bot. Increasingly, chatbots are also attracting interest in the world of FinTech, and a number of companies have developed their own chatbots using proprietary technology and algorithms. Chatbots utilise application programming interfaces (APIs) to integrate with data management platforms. This allows them to analyse the extracted data as well as web- and mobile-based user interfaces and deliver the necessary insights to the end customer. ... In their current form, chatbots have reached a certain level of maturity.
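At its simplest, the routing layer of such a bot maps a customer utterance to an intent and then to a backend API call. A toy sketch, with invented intent names and stubbed API identifiers in place of a real payments or data platform:

```python
# Map each intent to a backend API call and the keywords that trigger it.
# All names here are illustrative, not from any real platform.
INTENTS = {
    "balance": ("check_balance", ["balance", "how much"]),
    "transfer": ("make_transfer", ["transfer", "send money"]),
}

def route(utterance: str) -> str:
    """Return the backend API call for an utterance, or hand off to a human."""
    text = utterance.lower()
    for api_call, keywords in INTENTS.values():
        if any(k in text for k in keywords):
            return api_call
    return "handoff_to_human"

assert route("What's my balance?") == "check_balance"
assert route("Please send money to Bob") == "make_transfer"
```

Production bots replace the keyword matching with trained natural-language models and the stubbed calls with authenticated API requests, but the route-then-call structure is the same.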


CIOs Should Be Asking Questions In The Boardroom, Not Just Answering Them.

“A company with a clear vision of the future is more likely to win by either setting the rules of the game or being quick to take advantage of an unfolding new industry landscape defined by other players.” The CIO can catalyze a board to “look for gaps; reframe closed mindsets; provide external perspective; and point to potentially better options or directions. “Executive teams, no matter how effective at current operations, can often become myopic. A (CIO’s) big, well-aimed, simple question can disrupt such complacency,” he says. But, before this can even begin to happen, there remains the non-trivial matter of achieving board appointment for a technologist in the first place.  CIO or CTO NED board appointment is a needle that is hard to move in a boardroom culture dominated by finance and general management. To move it, Gartner’s formula is to invite board candidates with technology backgrounds to a series of dinners, also attended by major recruiting firms and board chairmen.


Dear network operators, please use the existing tools to fix security


It's tempting to point the finger at network operators for failing to deploy RPKI. But another finger needs to be pointed at the software vendors for providing shoddy documentation. Routing security isn't the only system where deploying existing tools can make a big difference. Huston said in 2017 that failing to secure the DNS with DNSSEC is savage ignorance. Network operators should get onto that before fingers are pointed at them. Network operators should also avoid being the recipient of pointing fingers by deploying DMARC message authentication to prevent spammers from spoofing their domains for email. The UK's National Cyber Security Centre (NCSC) has used DMARC to significantly reduce that risk for government domains. "That's how you stop people clicking on the link, because they never get the crap in the first place. Simple things done at scale can have a difference," said Dr Ian Levy, the NCSC's technical director, in 2018. The Australian government has also been deploying DMARC on its domains, though its efforts have lagged behind the UK.
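Deploying DMARC starts with publishing a DNS TXT record at "_dmarc" under the sending domain. A minimal sketch (the domain and report mailbox are placeholders):

```
_dmarc.example.gov.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.gov"
```

Here "p=reject" asks receiving servers to reject messages that fail SPF/DKIM alignment, and "rua" names the mailbox that receives aggregate reports; organisations typically start with "p=none" to monitor their mail flows before enforcing.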


Postgres Handles More Than You Think


Thinking about scaling beyond your Postgres cluster and adding another data store like Redis or Elasticsearch? Before adopting a more complex infrastructure, take a minute and think again. It's quite possible to get more out of an existing Postgres database. It can scale for heavy loads and offers powerful features which are not obvious at first sight. For example, it's possible to enable in-memory caching, text search, specialized indexing, and key-value storage. ... Postgres provides a powerful server-side function environment in multiple programming languages. Try to pre-process as much data as you can on the Postgres server with server-side functions. That way, you can cut down on the latency that comes from passing too much data back and forth between your application servers and your database. This approach is particularly useful for large aggregations and joins. What's even better is your development team can use its existing skill set for writing Postgres code. Other than the default PL/pgSQL (Postgres' native procedural language), Postgres functions and triggers can be written in PL/Python, PL/Perl, PL/V8 (JavaScript extension for Postgres) and PL/R.
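As a sketch of that pattern, a large aggregation can be packaged as a PL/pgSQL function so only the summary rows travel back to the application (the table and function names below are hypothetical):

```sql
-- Hypothetical event table; the aggregation runs server-side,
-- so only the grouped results cross the wire.
CREATE TABLE page_views (user_id int, viewed_at timestamptz);

CREATE FUNCTION daily_views(since date)
RETURNS TABLE (day date, views bigint)
LANGUAGE plpgsql AS $$
BEGIN
  RETURN QUERY
    SELECT viewed_at::date, count(*)
    FROM page_views
    WHERE viewed_at >= since
    GROUP BY viewed_at::date
    ORDER BY 1;
END;
$$;

-- The application then fetches one row per day instead of every event:
-- SELECT * FROM daily_views('2019-09-01');
```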



Quote for the day:


"Give whatever you are doing and whoever you are with the gift of your attention." -- Jim Rohn