Daily Tech Digest - June 18, 2019

7 Promises of Machine Learning in Finance


Machine Learning is the new black, or the new oil, or the new gold! Whatever you compare Machine Learning to, the comparison probably holds from a conceptual value perspective. But what about its relation to finance? What is the situation today? Banks keep everything: transaction histories, conversations with clients, internal information and more… Their storage has swelled to terabytes, and in some cases petabytes. Big Data techniques can handle this problem and process huge amounts of information: the greater the volume, the more client needs and behaviors can be detected. Artificial Intelligence paired with Machine Learning allows software to learn clients’ behavior and make autonomous decisions. ... It goes without saying that Machine Learning is remarkably well suited to finance, and the promise of this technology, together with Big Data and Artificial Intelligence, is very high. As you can see, there are many options, approaches, and applications for improvement: choosing an optimal location for a bank, finding the best solutions for a customer, turning algorithmic trading into intelligent trading, managing risk and preventing fraud ...


What Makes A Software Programmer a Professional?

Professional software developers understand that they generally have more knowledge of software development than the customers who have hired them to write code. Thus they understand that writing secure code, code that can’t be easily abused by hackers, is their responsibility. A software developer creating web applications probably needs to address more security risks than a developer writing embedded drivers for an IoT device, but each needs to assess the different ways the software is vulnerable to abuse and take steps to eliminate or mitigate those risks. Although it may be impossible to guarantee that any software is immune to an attack, professional developers will take the time to learn and understand the vulnerabilities that could exist in their software, and then take the subsequent steps to reduce the risk that their software is vulnerable. Protecting your software from security risks usually includes both static analysis tools and processes to reduce the introduction of security errors, but it primarily relies upon educating those writing the code.
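To make that concrete, one of the most common web vulnerabilities a professional developer guards against is SQL injection. A minimal Java sketch of the safer, parameterized approach (the table and column names here are purely illustrative):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class AccountLookup {

        // Unsafe pattern: concatenating untrusted input into the SQL text invites injection, e.g.
        //   "SELECT * FROM accounts WHERE owner = '" + owner + "'"

        // Safer: a parameterized query keeps the input out of the SQL text entirely.
        public static ResultSet findAccounts(Connection conn, String owner) throws SQLException {
            PreparedStatement stmt =
                conn.prepareStatement("SELECT id, balance FROM accounts WHERE owner = ?");
            stmt.setString(1, owner);   // the driver binds the value; it is never parsed as SQL
            return stmt.executeQuery();
        }
    }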


When serverless is a bad idea
Indeed, many refer to serverless as “no-ops,” but it’s really “reduced-ops,” or as my friend Mike Kavis likes to say, “some-ops.” Clearly the objective is to increase simplicity and make building and deploying net-new cloud-based serverless applications much more productive and agile. But serverless is not always a good idea. Indeed, it seems to be a forced fit a good deal of the time, causing more error than trial. Serverless is an especially bad idea when it comes to stateful applications. A stateless application means that every transaction is performed as if it were being done for the very first time. There is no previously stored information used for the current transaction. In contrast, a stateful application saves client data from the activities of one session for use in another. The data that is saved is often called the application’s state. Stateful applications are a bad fit for serverless.  Why? Serverless applications are made up of sets of services (such as functions) that are short running and stateless. Consider them transactions in the traditional sense, in that they are invoked and they don’t maintain anything after being executed.
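A minimal, framework-agnostic sketch of what that means in practice (the session store interface and names are hypothetical): because nothing survives between invocations, any state a session needs has to be written to, and read back from, an external service.

    import java.util.Map;

    // Hypothetical external key-value store; in a real deployment this would be a
    // managed cache or database, since the function itself keeps no state.
    interface SessionStore {
        Map<String, String> load(String sessionId);
        void save(String sessionId, Map<String, String> state);
    }

    public class CheckoutFunction {

        // Each invocation starts from scratch: no fields or local variables survive
        // between calls, so all session state is externalized.
        public String handle(String sessionId, String item, SessionStore store) {
            Map<String, String> cart = store.load(sessionId);  // fetch prior state
            cart.put(item, "1");                               // do this call's work
            store.save(sessionId, cart);                       // persist for the next call
            return "added " + item + " to session " + sessionId;
        }
    }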



Two Weekend Outages, Neither a Cyberattack

The power outages in Argentina occurred just prior to local elections being held in many parts of the country. Suspicious, right? But not as suspicious as what experts say is the country's aging power infrastructure. Indeed, Argentina's Clarín newspaper reports that the Argentinian government has blamed the "gigantic" outage on the country's electric power interconnection systems collapsing due to coastal storms. Officials also noted that the rolling blackouts had taken out not just Argentina, but large parts of Uruguay. Apparently, that's also what took out parts of Paraguay and Chile. ... Target suffered twin outages, neither of which traced to a hack attack. On Saturday, customers were unable to buy any goods in stores or online as a result of an outage that Target blamed on "an internal technology issue" and that lasted about two hours. "Our technology team worked quickly to identify and fix the issue, and we apologize for the inconvenience and frustration this caused for our guests," a Target spokesman said.


AI storage: Machine learning, deep learning and storage needs


The storage and I/O requirements of AI are not the same throughout its lifecycle. Conventional AI systems need training, and during that phase they will be more I/O-intensive, which is where they can make use of flash and NVMe. The “inference” stage will rely more on compute resources, however. Deep learning systems, with their ability to retrain themselves as they work, need constant access to data. “When some organisations talk about storage for machine learning/deep learning, they often just mean the training of models, which requires very high bandwidth to keep the GPUs busy,” says Doug O'Flaherty, a director at IBM Storage. “However, the real productivity gains for a data science team are in managing the entire AI data pipeline from ingest to inference.” The outputs of an AI program, for their part, are often small enough that they are no issue for modern enterprise IT systems. This suggests that AI systems need tiers of storage and, in that respect, they are not dissimilar to traditional business analytics or even enterprise resource planning (ERP) and database systems.
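As a rough illustration of the bandwidth point, training pipelines commonly prefetch batches in the background so the accelerators never wait on storage. A simplified Java sketch, with the data directory and queue size invented for illustration:

    import java.nio.file.*;
    import java.util.concurrent.*;

    // Simplified training-ingest loop: a background thread streams batches from storage
    // into a bounded queue so the compute (GPU) step is never left waiting on I/O.
    public class PrefetchingLoader {

        public static void main(String[] args) throws Exception {
            BlockingQueue<byte[]> batches = new ArrayBlockingQueue<>(8);
            Path dataDir = Paths.get("training-data");   // hypothetical dataset location

            // Reader thread: keeps the queue full, i.e. keeps bandwidth demand high.
            Thread reader = new Thread(() -> {
                try (DirectoryStream<Path> files = Files.newDirectoryStream(dataDir)) {
                    for (Path f : files) {
                        batches.put(Files.readAllBytes(f));
                    }
                    batches.put(new byte[0]);             // zero-length sentinel: no more data
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            reader.start();

            // Training thread: consumes batches; in a real system this is the GPU-bound step.
            byte[] batch;
            while ((batch = batches.take()).length > 0) {
                // train(batch);  // placeholder for the compute-heavy part
            }
        }
    }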



Can Your Patching Strategy Keep Up with the Demands of Open Source?

An alarming number of companies aren't applying patches in a timely fashion (for both proprietary and open source software), opening themselves to risk. The reasons for not patching are varied: Some organizations are overwhelmed by the endless stream of available patches and are unable to prioritize what needs to be patched, some lack the trained resources to apply patches, and some need to balance risk with the financial costs of addressing that risk. Unpatched software vulnerabilities are one of the biggest cyberthreats that organizations face, and unpatched open source components in software add to security risk. The 2019 OSSRA report notes that 60% of the codebases audited in 2018 contained at least one open source vulnerability. In 2018, the NVD added over 16,500 new vulnerabilities. This represents a rate of over 45 new disclosures daily, a pace most organizations are ill-equipped to handle. Given that open source components are consumed both in source form and through commercial applications, a comprehensive open source governance strategy should encompass both source code usage and the governance practices of any software or service provider.
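Governance usually starts with something as simple as checking a build's declared components against known advisories. A toy Java sketch of that idea, with component names and advisory data invented for illustration (real tools pull this data from feeds such as the NVD):

    import java.util.List;
    import java.util.Map;

    // Toy dependency audit: flags declared components that match a known advisory.
    public class OpenSourceAudit {

        public static void main(String[] args) {
            // Declared components of a build, e.g. parsed from a manifest (hypothetical versions).
            Map<String, String> dependencies = Map.of(
                    "example-xml-parser", "2.4.1",
                    "example-http-client", "1.9.0");

            // Known-vulnerable component:version pairs from an advisory feed (invented data).
            List<String> advisories = List.of("example-xml-parser:2.4.1");

            dependencies.forEach((name, version) -> {
                if (advisories.contains(name + ":" + version)) {
                    System.out.println("UNPATCHED: " + name + " " + version
                            + " has a known vulnerability; schedule an upgrade.");
                }
            });
        }
    }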


3 rules for succeeding with AI and IoT at the edge


The primary value of combining AI, IoT and edge computing is their ability to generate fast, appropriate responses to events signaled by IoT sensors. Virtual and augmented reality applications demand this kind of response, as do enterprise applications in process control and the movement of goods. The cooperation inherent in manufacturing, warehousing, sales and delivery will likely create the sweet spot for an IoT-enabled AI edge. Such activities form a chain of movement of goods that crosses many different companies and demands coordination that a single-company IoT model could not provide. ... Think event flows, not workflows, in your application planning. Most enterprise development practices were weaned on transaction processing, and transactions are multistep, contextual, update-centric forms of work. Their pace of generation can be predicted fairly well, and when a transaction is initiated, the flow of information it triggers is usually predictable. Events are simply signals of conditions or changes in conditions.
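The difference shows up directly in code: instead of orchestrating a multistep transaction, an edge application registers short handlers that react to individual sensor events as they arrive. A minimal sketch, with sensor names and thresholds invented for illustration:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    // Event-flow style: small handlers react to individual signals as they arrive,
    // rather than driving a predictable, multistep transaction.
    public class EdgeEventBus {

        static class SensorEvent {
            final String sensorId;
            final double value;
            SensorEvent(String sensorId, double value) {
                this.sensorId = sensorId;
                this.value = value;
            }
        }

        private final List<Consumer<SensorEvent>> handlers = new ArrayList<>();

        void onEvent(Consumer<SensorEvent> handler) {
            handlers.add(handler);
        }

        void publish(SensorEvent event) {
            handlers.forEach(h -> h.accept(event));   // each handler is short and stateless
        }

        public static void main(String[] args) {
            EdgeEventBus bus = new EdgeEventBus();
            // React to one condition change; no workflow context is carried between events.
            bus.onEvent(e -> {
                if ("conveyor-temp".equals(e.sensorId) && e.value > 80.0) {
                    System.out.println("Slow conveyor: temperature " + e.value);
                }
            });
            bus.publish(new SensorEvent("conveyor-temp", 85.2));
        }
    }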


Building a cyber-physical immune system

To build a credible model of its own behaviour, the system must not just learn its digital behaviour, but also capture the behaviour of its physical subsystems. One way to achieve this is to represent the behaviour in terms of physical laws. For example, moving parts of a system will obey the laws of motion; parts of a heating subsystem will obey the laws of thermodynamics; and electrical installations will obey current and voltage laws. In theory, it is possible to sense relevant physical quantities, apply the correct physical laws and then detect departures from expected behaviour. These deviations suggest that the system might be functioning abnormally, because of its own wear and tear, spontaneous failure, or concerted malicious activity. Anomaly detection, in principle, operates in this manner, but it has been applied rather narrowly to specific subsystems. ... to build a cyber-physical immune system, it is necessary to engage with experts who work on its non-cyber aspects.
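For instance, a heating subsystem's cool-down can be checked against the exponential decay predicted by Newton's law of cooling, and readings that drift too far from the prediction can be flagged. A simplified sketch, with constants and tolerances chosen purely for illustration:

    // Compares a temperature reading against the value predicted by Newton's law of
    // cooling, T(t) = T_env + (T0 - T_env) * exp(-k * t), and flags large deviations
    // as possible wear, spontaneous failure, or tampering.
    public class ThermalAnomalyCheck {

        static final double AMBIENT = 20.0;     // degrees C (illustrative)
        static final double INITIAL = 90.0;     // temperature at t = 0
        static final double K = 0.05;           // cooling constant for this subsystem
        static final double TOLERANCE = 5.0;    // allowed deviation before flagging

        static double expected(double tSeconds) {
            return AMBIENT + (INITIAL - AMBIENT) * Math.exp(-K * tSeconds);
        }

        static boolean isAnomalous(double tSeconds, double measured) {
            return Math.abs(measured - expected(tSeconds)) > TOLERANCE;
        }

        public static void main(String[] args) {
            double t = 60.0;                     // one minute into the cool-down
            double reading = 75.0;               // reported by the physical sensor
            System.out.println("expected " + expected(t) + ", measured " + reading
                    + (isAnomalous(t, reading) ? " -> ANOMALY" : " -> normal"));
        }
    }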


Many businesses are investing in microservices, for example, to enable faster, more efficient application development. But whereas in traditional models, applications are deployed to application servers, in a microservices-based architecture, servers are deployed to the application. One consequence is that tasks previously handled by the application server—such as authentication, authorization, and session management—are shifted to each microservice. If a business has thousands of such microservices powering their applications across multiple clouds, how can its IT leaders even begin to think of a perimeter? ... Historically, many enterprises applied management and security to only a subset of APIs—e.g., those shared with internal partners and hosted behind the corporate firewall (within a walled garden, for example). But because network perimeters no longer contain all the experiences that drive business, enterprises should think of each API as a possible point of business leverage and a possible point of vulnerability. To adapt to today’s application development demands and threat environment, in other words, APIs should be managed and secured, regardless of where they are located.
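In practice, that means each service repeats a check the application server once performed centrally. A minimal sketch of per-service token verification, where the verifier interface and the authorization rule are hypothetical:

    import java.util.Optional;

    // Hypothetical token verifier; in a real service this would validate a signed
    // token (such as a JWT) issued by the organization's identity provider.
    interface TokenVerifier {
        Optional<String> verify(String bearerToken);   // returns the caller's identity if valid
    }

    public class InventoryService {

        private final TokenVerifier verifier;

        public InventoryService(TokenVerifier verifier) {
            this.verifier = verifier;
        }

        // Every request to this microservice is authenticated and authorized here,
        // because there is no application server or network perimeter to rely on.
        public String getStockLevel(String bearerToken, String sku) {
            String caller = verifier.verify(bearerToken)
                    .orElseThrow(() -> new SecurityException("invalid or missing token"));
            if (!caller.startsWith("svc-")) {            // toy authorization rule
                throw new SecurityException("caller " + caller + " not allowed");
            }
            return "stock level for " + sku;             // the actual business logic
        }
    }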


Love It or Hate It, Java Continues to Evolve

What’s really important is that Java is continuing to evolve. With the new six-month release cadence of the OpenJDK, it might seem like the pace of change has slowed but, if anything, the reverse is true. We are seeing a constant stream of new features, many of them quite small, yet making developers’ lives much easier. To add big new features to Java takes time because it’s essential to get these things right. We will see in JDK 13 a change to the switch expression, which was introduced as a preview feature in JDK 12. Rather than setting the syntax in stone (via the Java SE specification), preview features allow developers to try a feature and provide feedback before it is finalized. That’s precisely what happened in this case. The longer-term Project Amber will continue to make well-reasoned changes to the language syntax to smooth some of the rough edges that developers find trying at times. You can expect to see more parts of Amber delivered over the next few releases.
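For reference, this is roughly how a switch expression looks in the JDK 13 preview (compiled with --enable-preview), where yield replaced the earlier break-with-value form from JDK 12:

    // Switch expression (preview in JDK 12/13): the switch yields a value directly,
    // the arrow form needs no break, and a block-bodied case uses 'yield' in JDK 13.
    public class SwitchDemo {

        enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

        static int lettersIn(Day day) {
            return switch (day) {
                case MONDAY, FRIDAY, SUNDAY -> 6;
                case TUESDAY                -> 7;
                case THURSDAY, SATURDAY     -> 8;
                case WEDNESDAY              -> {
                    yield 9;   // 'yield' returns a value from a block body (JDK 13 syntax)
                }
            };
        }

        public static void main(String[] args) {
            System.out.println(lettersIn(Day.WEDNESDAY));   // prints 9
        }
    }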



Quote for the day:


"You must expect great things of yourself before you can do them." -- Michael Jordan

