Daily Tech Digest - July 12, 2019

Can we train AI by playing games?

Reinforcement learning is an area of machine learning that has received lots of attention from researchers over the past decade. Benaich and Hogarth define it as being concerned with "software agents that learn goal-oriented behavior by trial and error in an environment that provides rewards or penalties in response to the agent's actions (called a 'policy') towards achieving that goal." A good chunk of the progress made in RL has to do with training AI to play games, equaling or surpassing human performance. StarCraft II, Quake III Arena and Montezuma's Revenge are just some of those games. More important than the sensationalist aspect of "AI beats humans", however, are the methods through which RL may reach such outcomes: play-driven learning, combining simulation with the real world, and curiosity-driven exploration. As children, we acquire complex skills and behaviors by learning and practicing diverse strategies and behaviors in a low-risk fashion, i.e., play time.
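To make the definition concrete, here is a minimal sketch of that trial-and-error loop: tabular Q-learning on a toy five-state corridor. The environment, rewards and hyperparameters are invented for illustration and have nothing to do with the game-playing systems mentioned above.

```java
import java.util.Random;

// Minimal tabular Q-learning on a toy 5-state corridor: the agent starts
// at state 0 and earns a reward only on reaching state 4. Everything here
// (states, rewards, hyperparameters) is illustrative, not from the article.
public class QLearningSketch {
    static final int STATES = 5, ACTIONS = 2;  // actions: 0 = left, 1 = right
    static final double ALPHA = 0.1;           // learning rate
    static final double GAMMA = 0.9;           // discount factor
    static final double EPSILON = 0.1;         // exploration rate

    public static void main(String[] args) {
        double[][] q = new double[STATES][ACTIONS];
        Random rng = new Random(42);

        for (int episode = 0; episode < 500; episode++) {
            int state = 0;
            while (state != STATES - 1) {
                // Epsilon-greedy: mostly exploit the current policy, sometimes explore.
                int action = rng.nextDouble() < EPSILON
                        ? rng.nextInt(ACTIONS)
                        : (q[state][1] >= q[state][0] ? 1 : 0);
                int next = Math.max(0, Math.min(STATES - 1, state + (action == 1 ? 1 : -1)));
                double reward = (next == STATES - 1) ? 1.0 : 0.0;
                // Q-update: nudge Q(s,a) toward reward + discounted best future value.
                double best = Math.max(q[next][0], q[next][1]);
                q[state][action] += ALPHA * (reward + GAMMA * best - q[state][action]);
                state = next;
            }
        }
        for (int s = 0; s < STATES; s++)
            System.out.printf("state %d: left=%.3f right=%.3f%n", s, q[s][0], q[s][1]);
    }
}
```

After a few hundred episodes the "right" action dominates in every state: the policy was never programmed, only discovered through rewards.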


APT Groups Make Quadruple What They Spend on Attack Tools

"The potential benefit from an attack far exceeds the cost of a starter kit, says Leigh-Anne Galloway, cybersecurity resilience lead at Positive Technologies. For groups like Silence, the profit from one attack is typically more than quadruple the cost of the attack toolset, she says. The ROI for some APT groups can be many magnitudes higher. Positive Technologies, for instance, estimated that APT38, a profit-driven threat group with suspected backing from the North Korean government, spends more than $500,000 for carrying out attacks on financial institutions but gets over $41 million in return on average. A lot of the money that APT38 spends is on tools similar to those used by groups engaged in cyber espionage campaigns. Building an effective system of protection against APTs can be expensive, Galloway says. For most organizations that have experienced an APT attack, the cost of restoring infrastructure in many cases is the main item of expenditure. "It can be much more than direct financial damage from an attack," she says.


Smarter IoT concepts reveal creaking networks

"The internet, as we know it, is based on network architectures of the 70s and 80s, when it was designed for completely different applications,” the researchers say in their media release. The internet has centralized security, which causes choke points, and and an inherent lack of dynamic controls, which translates to inflexibility in access rights — all of which make it difficult to adapt the IoT to it. Device, data, and process management must be integrated into IoT systems, say the group behind the project, called DoRIoT (Dynamische Laufzeitumgebung für Organisch (dis-)Aggregierende IoT-Prozesse), translated as Dynamic Runtime Environment for Organic dis-Aggregating IoT Processes. “In order to close this gap, concepts [will be] developed in the project that transparently realize the access to the data,” says Professor Sebastian Zug of the University of Freiberg, a partner in DoRIoT. “For the application, it should make no difference whether the specific information requirement is answered by a server or an IoT node.”


Managing Third-Party Risks: CISOs' Success Strategies

As more organizations rely on third parties for various services, managing the security risks involved is becoming a bigger challenge. Those risks, indeed, can be significant. For example, earlier this year, Indian IT outsourcing giant Wipro was targeted by hackers who in turn launched phishing attacks against its customers. Among the toughest third-party risk management challenges are: Keeping track of the long list of outsourcers an organization uses and making sure they're assessed for security; Taking steps to minimize the amount of sensitive data that's shared with vendors - and making sure that data is adequately protected; and Holding vendors to a uniform standard for security. "For most organizations, there is still a long way to go in strengthening governance when it comes to vendor management," says Jagdeep Singh, CISO at InstaRem, a Singapore-based fintech company. "We need to look at the broader risk posture that vendors bring in ... which will determine the sort of due diligence you want to carry out."


To encourage an Agile enterprise architecture, software teams must devise a method to get bottom-up input and enforce consistency. Apply tenets of continuous integration and continuous delivery all the way to planning and architecture. With a dynamic roadmap, an organization can change its planning from an annual endeavor to a practically nonstop effort. Lufthansa Systems, a software and IT service provider for the airline industry under parent company Lufthansa, devised a layered approach to push customer demand into product architecture planning. Now, the company can continuously update and improve products, said George Lewe, who manages the company's roster of Atlassian tools that underpin its multi-team collaboration. "We get much more input from the customers -- really cool ideas," Lewe said. "Some requests might not fit into our product strategy or, for technical reasons, it's not possible, but we can look at all of them." Lufthansa Systems moved its support agents, product managers and software developers onto Atlassian Jira, a project tracking tool, with a tiered concept.


What does the death of Hadoop mean for big data?

The Hadoop software framework, which facilitated distributed storage and processing of big data using the MapReduce programming model, served enterprises' data ambitions sufficiently. The modules in Hadoop were developed for computer clusters built from commodity hardware and eventually also found use on clusters of higher-end hardware. But broader adoption of the open-source distributed storage technology, modeled on systems first described by Google, did not come to be, as enterprises began opting to move to the cloud and explore AI, including machine learning and deep learning, as part of their big data initiatives. Worse, several big Hadoop-based solution providers that had been unprofitable for years were forced to merge to minimize losses, and one may be forced to shut down altogether. However, the question remains whether the fate of these vendors indicates only the demise of Hadoop-powered solutions and other open source data platforms, or the death of big data as a whole. Was big data merely a fad or a passing interest of industries?
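Hadoop jobs are written in Java, and the programming model the article refers to is compact enough to show. Below is the canonical word-count example in condensed form (the job-driver boilerplate is omitted): the map step emits a (word, 1) pair for every token, and the reduce step sums those counts per word across the cluster.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Condensed from the standard Hadoop WordCount example.
public class WordCount {
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                word.set(token);
                context.write(word, ONE); // emit (word, 1) for each token
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum)); // emit (word, total)
        }
    }
}
```

The framework handles the distribution: it splits the input across the cluster, runs the mapper near the data, shuffles pairs by key, and feeds each key's values to one reducer.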


From Machine Learning to Machine Cognition  

Keeping logic and decisions outside the network is what has been done until now. For decisions, we use automated systems based on software running on CPUs instead of artificial cognitive networks. While these systems work very well and will remain in use for a long time, they are limited. Essentially, these programs perform simple iterative tasks or move controls and numbers around monitor windows with millions of lines of code. This approach may be adequate for games and simple, narrow tasks, but it falls short when dealing with general concepts. Such programs do not build up enough internal connections and will hardly evolve into intelligence; the complexity required to emulate imagination, intuition and the like is simply too high. Image recognition was developed with neural networks because it was impossible to write an iterative algorithm for it. The same should be done with cognition: decisions should use neurons, and cognition should be kept inside the network together with concepts and learning, since they share common neurons.


Open-Source Tool Lets Anyone Experiment With Cryptocurrency Blockchains

In researching blockchains, Shudo and his colleagues searched for a simulator that would help them experiment with and improve the technology. But existing simulators were too hard to use and lacked the features the team wanted. Moreover, these simulators had apparently been created for specific research and were abandoned soon after that work was completed, because many of the tools the group found were no longer being updated.  "The most recent simulator we looked at was developed in October 2016," says Shudo. "And it was no longer being maintained." So, the group developed its own simulator. Dubbed SimBlock, it runs on any personal computer supporting Java and enables users to easily change the behavior of blockchain nodes. Consequently, investigating the effects of changed node-behavior has now become a straightforward matter, says Shudo. "All the parameters of the nodes in SimBlock are written in Java," he explains. "These source files are separated from the main SimBlock Java source code, so the user simply edits [the nodes’] source code to change their behavior."
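As a flavor of what "editing the nodes' source code" might look like in practice, here is a hypothetical parameter file in the spirit Shudo describes. The class and field names are invented for illustration; they are not SimBlock's actual sources.

```java
// Hypothetical illustration of the workflow Shudo describes: node behavior
// lives in plain Java source that the user edits directly and re-runs.
// These names are invented and are not SimBlock's actual classes.
public class NodeParameters {
    // Tweak these and re-run the simulator to study their effect on block propagation.
    public static final int  NUM_NODES            = 600;      // network size
    public static final long BLOCK_INTERVAL_MS    = 600_000;  // target block time (10 min)
    public static final int  OUTBOUND_CONNECTIONS = 8;        // peers per node
    public static final long BLOCK_SIZE_BYTES     = 535_000;  // average block size
}
```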


Visual Studio Code: Stepping on Visual Studio’s toes?

Microsoft describes Visual Studio as a full-featured development environment that accommodates complex workflows. Visual Studio integrates all kinds of tools in one environment, from designers, code analyzers, and debuggers to testing and deployment tools. Developers can use Visual Studio to build cloud, mobile, and desktop apps for Windows and macOS. Microsoft describes Visual Studio Code, on the other hand, as a streamlined code editor with just the tools needed for a quick code-build-debug cycle. The cross-platform editor complements a developer's existing tool chain and is typically used for web and cloud applications. But while Microsoft views the two tools as complementary, developers have been raising questions about redundancy for years. Responses to a Stack Overflow question posted four years ago sum up the differences this way: Visual Studio Code is "cross-platform," "file oriented," "extensible," and "fast," whereas Visual Studio is "full-featured," "project and solution oriented," "convenient," and "not fast."


Attacks against AI systems are a growing concern

The continuing game of “cat and mouse” between attackers and defenders will reach a whole new level when both sides are using AI, said Hypponen, and defenders will have to adapt quickly as soon as they see the first AI-enabled attacks emerging. But despite the claims of some security suppliers, Hypponen told Computer Weekly in a recent interview that no criminal groups appear to be using AI to conduct cyber attacks. The Sherpa study therefore focuses on how malicious actors can abuse AI, machine learning and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are already within attackers’ reach, including the creation of sophisticated disinformation and social engineering campaigns. Although the research found no definitive proof that malicious actors are currently using AI to power cyber attacks, as indicated by Hypponen, the researchers highlighted that adversaries are already attacking and manipulating existing AI systems used by search engines, social media companies, recommendation websites, and more.



Quote for the day:


"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

