Daily Tech Digest - July 07, 2017

Algorithmic Decomposition Versus Object-Oriented Decomposition

The principal advantage of object-oriented decomposition is that it encourages the reuse of software. This results in flexible software systems that can evolve as system requirements change. It allows a programmer to use object-oriented programming languages effectively. Object-oriented decomposition is also more intuitive than algorithm-oriented decomposition because objects naturally model entities in the application domain. Object-oriented design is a design strategy where system designers think in terms of ‘things’ instead of operations or functions. The executing system is made up of interacting objects that maintain their own local state and provide operations on that state information. They hide information about the representation of the state and hence limit access to it. An object-oriented design process involves designing object classes and the relationships between these classes.
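As a concrete illustration of the excerpt above, here is a minimal Python sketch (the class, method names, and domain are invented for illustration): an object models a domain entity, keeps its state local, and hides its representation behind operations on that state.

```python
# A hypothetical BankAccount as an object-oriented decomposition unit:
# the balance is private state, reachable only through the operations
# the object provides.
class BankAccount:
    def __init__(self, owner: str, balance: float = 0.0):
        self._owner = owner      # representation is hidden behind methods
        self._balance = balance

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self) -> float:
        return self._balance

account = BankAccount("Alice")
account.deposit(100.0)
account.withdraw(30.0)
print(account.balance())  # 70.0
```

Because callers depend only on the operations, the internal representation can change (say, to integer cents) without touching the rest of the system, which is the reuse and evolvability the excerpt describes.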

Virtual Panel: High Performance Application in .NET

Microsoft has made quite a few investments in platform performance. To cite some examples: .NET Native was introduced a few years ago to improve startup times and reduce memory usage for client apps; .NET 4.5 and 4.6 saw important improvements to the scalability of the garbage collector; .NET 4.6 had a revamped JIT compiler with support for vectorization; C# 7 introduced ref locals and ref returns, language-level features designed to enable better performance. All in all, it would probably be easier for me personally to write a small high-performance application in a lower-level language like C or C++. However, introducing unmanaged code into an existing system, or developing a large codebase with these languages, is not a decision to be taken lightly. It makes sense to bet on .NET for certain kinds of high-performance systems, as long as you are aware of the challenges and have the tools to solve them at the code level.

Medical devices at risk: 5 capabilities that invite danger

Chris Camejo, director of product management for threat intelligence at NTT Security, noted that most medical devices in use today would be secure “only in a closed, trusted environment without any potentially malicious activity.” “Unfortunately a hospital network can't be considered trusted, as it is connected to the internet and contains thousands of internal users, any one of whom could click on the wrong link or download the wrong attachment,” he said. Still, debate continues about how imminent the risk of physical harm is. Jay Radcliffe, a medical device security expert and Type 1 diabetic, famously said at the 2014 Black Hat conference that it would be far more likely for “an attacker to sneak up behind me and deliver a fatal blow to my head with a baseball bat” than for him to be harmed by a cyber attack.

Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes)

While deep learning AI can surprise its human users with flashes of brilliance — or stupidity — deterministic software always produces the same output from a given input. “Machine learning cannot be verified and certified,” Cherepinsky said. “Some algorithms we chose not to use… even though they work on the surface, they’re not certifiable, verifiable, and testable.” Sikorsky has used some deep learning algorithms in its flying laboratory, Cherepinsky said, and he’s far from giving up on the technology, but he doesn’t think it’s ready for real-world use: “The current state of the art (is) they’re not explainable yet.” ... “You see in artificial intelligence an increasing trend towards lifelike agents and a demand for those agents, like Siri, Cortana, and Alexa, to be more emotionally responsive, to be more nuanced in ways that are human-like,” David Hanson, CEO of Hong Kong-based Hanson Robotics, told the Johns Hopkins conference.

Experts Predict When Artificial Intelligence Will Exceed Human Performance

The experts predict that AI will outperform humans in the next 10 years in tasks such as translating languages (by 2024), writing high school essays (by 2026), and driving trucks (by 2027). But many other tasks will take much longer for machines to master. AI won’t be better than humans at working in retail until 2031, able to write a bestselling book until 2049, or capable of working as a surgeon until 2053. The experts are far from infallible. They predicted that AI would be better than humans at Go by about 2027. (This was in 2015, remember.) In fact, Google’s DeepMind subsidiary has already developed an artificial intelligence capable of beating the best humans. That took two years rather than 12. It’s easy to think that this gives the lie to these predictions.

Major cyber incidents accelerating, says NCSC

“This increase in major attacks is mainly being driven by the fact that cyber attack tools are becoming more readily available, in combination with a growing willingness to use them,” he told The Cyber Security Summit in London. Although the WannaCry ransomware attacks in May 2017 came very close, Noble said there had been no C1-level national cyber security incidents to date. The majority of the major incidents the NCSC has dealt with were C3-level attacks, typically confined to single organisations. These account for 451 incidents to date. The remaining 29 major incidents were C2-level attacks, significant attacks that typically require a cross-government response. Across these nearly 500 incidents, Noble said there were five common themes or lessons to be learned.

Learning To Wear Your Intelligence: How To Apply AI In Wearables and IoT

Kurzweil claims that machines will pass the Turing AI test by 2029, and that around 2045, “the pace of change will be so astonishingly quick that we won’t be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating”. He further claims that humans will be a hybrid of biological and non-biological intelligence that becomes increasingly dominated by its non-biological component. Kurzweil envisions nanobots inside our bodies that fight against infections and cancer, replace organs, and improve memory and cognitive abilities. ... The artificial general intelligence (AGI), or strong AI, community, though its estimates of the timeframe for reaching the singularity vary widely, is in consensus that it is plausible, with most mainstream AI researchers doubting that progress will be that rapid.

A Tour of Recurrent Neural Network Algorithms for Deep Learning

Recurrent neural networks, or RNNs, are a type of artificial neural network that adds weights to create cycles in the network graph, in an effort to maintain an internal state. The promise of adding state to neural networks is that they will be able to explicitly learn and exploit context in sequence prediction problems, such as problems with an order or temporal component. After reading this post, you will know: how top recurrent neural networks used for deep learning work, such as LSTMs, GRUs, and NTMs; how top RNNs relate to the broader study of recurrence in artificial neural networks; and how research in RNNs has led to state-of-the-art performance on a range of challenging problems.
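The recurrent idea described above can be sketched in a few lines of pure Python (the weights here are fixed toy values, not a trained network): a hidden state carries context from one step of a sequence to the next.

```python
import math

# A minimal vanilla recurrent cell: the new hidden state mixes the
# current input with the previous hidden state, so earlier inputs
# influence later predictions through the state.
def rnn_step(x: float, h: float, w_x: float = 0.5, w_h: float = 0.8,
             b: float = 0.0) -> float:
    return math.tanh(w_x * x + w_h * h + b)

sequence = [1.0, 0.0, 0.0, 0.0]  # a single impulse at the start
h = 0.0                          # initial hidden state
states = []
for x in sequence:
    h = rnn_step(x, h)
    states.append(h)

# The first input's influence decays but persists through the state,
# which is the "context" the excerpt describes; LSTMs and GRUs add
# gating to control how long such context is retained.
print(states)
```

In a real LSTM or GRU the scalar weights become matrices and gates decide what to keep or forget, but the cycle, state at each step feeding the next, is the same.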

Machine Learning, Artificial Intelligence - And The Future Of Accounting

When accounting software companies eliminated desktop support in favor of cloud-based services, accounting firms were forced to adapt to life in the cloud. Similarly, accounting departments and firms will be forced to adopt machine learning to remain competitive since machines can deliver real-time insights, enhance decision making and catapult efficiency. Rather than eliminate the human workforce in accounting firms, the humans will have new colleagues—machines—who will pair with them to provide more efficient and effective services to clients. Currently, there is no machine replacement for the emotional intelligence requirements of accounting work, but machines can learn to perform redundant, repeatable and oftentimes extremely time-consuming tasks.

Model-Based Software Engineering to Tame the IoT Jungle

The ThingML approach's first goal is to let designers abstract away from heterogeneous platforms and IoT devices when modeling the desired IoT system's architecture. In practice, platforms and devices, as well as the final distribution of software components, typically aren't known during the early design phases. The architecture model consists of components, ports, connectors, and asynchronous messages. Once the general architecture is defined, our approach allows for specification of the components' business logic in a platform-independent way using statecharts and the action language. ThingML statecharts include composites, parallel regions, and history states. The state machines typically react to events corresponding to incoming messages on a component's port.
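To make the statechart idea concrete, here is a tiny Python sketch (this is not ThingML syntax; the component, states, and message names are all hypothetical): a component reacts to incoming messages by transitioning between named states.

```python
# A hypothetical Thermostat component whose behavior is a flat state
# machine: a transition table maps (current state, message) pairs to
# the next state, mirroring how a statechart reacts to messages
# arriving on a component's port.
class Thermostat:
    TRANSITIONS = {
        ("idle", "too_cold"): "heating",
        ("heating", "warm_enough"): "idle",
        ("idle", "too_hot"): "cooling",
        ("cooling", "cool_enough"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def on_message(self, msg: str) -> str:
        # A message with no matching transition leaves the state
        # unchanged, as in a statechart that ignores the event.
        self.state = self.TRANSITIONS.get((self.state, msg), self.state)
        return self.state

t = Thermostat()
print(t.on_message("too_cold"))     # heating
print(t.on_message("warm_enough"))  # idle
```

ThingML's statecharts go further, with composite states, parallel regions, and history states, and its code generators would emit platform-specific implementations of a model like this rather than having it hand-written.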

Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad
