Daily Tech Digest - October 04, 2019

The decision to make an Android smartphone more than anything else reflects Microsoft's changing priorities. The Surface brand is now a success, generating over a billion dollars in revenue per quarter. Perhaps the brand is strong enough that there is now demand for a phone-sized Surface device, even if it doesn't run on Windows like the rest of the line. I'm not entirely convinced that's the case, but there will be some enthusiasts who will want to be Surface users, from handheld device to massive collaboration screen. The other reason for a Surface phone is the success of Microsoft's app strategy, which has basically ensured that, even if you aren't using a Windows device, you can still get access to a wide range of Microsoft services. Microsoft, as my colleague Mary Jo Foley points out, already has over 150 apps in the Google Play app store. Having a phone to showcase those apps makes sense and may even encourage more developers to experiment with new versions that take advantage of those dual screens. Supporting those two strategies is a higher priority than trying to make Windows smartphones happen again.


Hard Fork on Blockchain
With the introduction of blockchain technology in enterprise software development, organizations are asking for guidance on how to deliver DevOps for blockchain projects. ... Blockchain applications are often designed to handle financial transactions, track mission-critical business processes, and maintain the confidentiality of their consortium members and the customers they serve. Software faults in these areas might, in extreme cases, represent a significant risk to an organization. As a result, blockchain applications usually demand more rigorous risk management and testing strategies than traditional software applications. A popular approach is to look at smart contract design much as you’d look at microservice design: Decompose the solution into its core entities and the processes that act on those entities, then develop discrete, composable smart contracts for each entity and process so they can evolve independently over time. 
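To make that decomposition idea concrete, here is a minimal TypeScript sketch (illustrative only; the `AssetRegistry` and `TransferProcess` names are invented, not from the article) of an entity contract and a process contract kept as separate, composable pieces so each can evolve independently:

```typescript
// Entity contract: owns the state of one core entity (an "asset" here).
interface AssetRegistry {
  register(assetId: string, owner: string): void;
  ownerOf(assetId: string): string | undefined;
}

// Process contract: encodes one business process that acts on the entity.
// It depends only on the entity's interface, so either side can be
// upgraded independently as long as the interface is preserved.
interface TransferProcess {
  transfer(assetId: string, from: string, to: string): boolean;
}

// A simple in-memory implementation of the entity contract.
class InMemoryAssetRegistry implements AssetRegistry {
  private owners = new Map<string, string>();
  register(assetId: string, owner: string): void {
    this.owners.set(assetId, owner);
  }
  ownerOf(assetId: string): string | undefined {
    return this.owners.get(assetId);
  }
}

// The process is composed with the entity contract it needs.
class SimpleTransferProcess implements TransferProcess {
  constructor(private registry: AssetRegistry) {}
  transfer(assetId: string, from: string, to: string): boolean {
    if (this.registry.ownerOf(assetId) !== from) return false; // reject transfers from non-owners
    this.registry.register(assetId, to);
    return true;
  }
}
```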



Information security leaders are certainly aware of the potential hazards of insiders and have taken steps to mitigate the risk. Some 69% of organizations that suffered a data breach due to an insider threat said they did have a prevention solution in place at the time. Even so, 78% of information security leaders acknowledged that their prevention strategies and solutions aren't sufficient to stop insider threats, even with traditional data loss prevention (DLP) tools in place. "We're seeing companies empower their employees without the proper security programs in place, leaving companies in a heightened state of risk," Jadee Hanson, CISO and vice president of information systems at Code42, said in a press release. "In addition to enforcing awareness trainings, implementing data loss protection technologies and adding data protection measures to on- and off-boarding processes, organizations should not delay in launching transparent, cross-functional insider threat programs. Insider threats are real. Failing to act will only result in increasingly catastrophic data loss and breaches."


TinyML: The challenges and opportunities of low-power ML applications


TinyML can be used anywhere it’s difficult to supply power. “Difficult” doesn’t just mean that power is unavailable; it might mean that supplying power is inconvenient. Think about a factory floor, with hundreds of machines. And think about using thousands of sensors to monitor those machines for problems (vibration, temperature, etc.) and order maintenance before a machine breaks down. You don’t want to string the factory with wires and cables to supply power to all the monitors; that would be a hazard all its own. Ideally, you would like intelligent sensors that can send wireless notifications only when needed; they might be powered by a battery, or even by generating electricity from vibration. The smart sensor might be as simple as a sticker with an embedded processor and a tiny battery. We’re at the point where we can start building that. Think about medical equipment. Several years ago, a friend of mine built custom equipment for medical research labs. Many of his devices couldn’t tolerate the noise created by a traditional power supply and had to be battery powered.
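As a rough illustration of that "notify only when needed" pattern, here is a minimal TypeScript sketch (the `VibrationSensor` and `Radio` interfaces and the threshold are assumptions for illustration, not from the article):

```typescript
// Hypothetical hardware interfaces; names are illustrative only.
interface VibrationSensor { read(): number; } // e.g., RMS vibration in mm/s
interface Radio { send(message: string): void; }

// Evaluate one reading locally and transmit only if it crosses the
// threshold. Called on each periodic wake-up, so the radio (typically
// the most power-hungry component) stays off most of the time.
function checkAndNotify(sensor: VibrationSensor, radio: Radio, threshold: number): void {
  const reading = sensor.read();
  if (reading > threshold) {
    radio.send(JSON.stringify({ alert: "vibration", value: reading }));
  }
  // Otherwise do nothing: no transmission, minimal energy spent.
}
```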


5 technical capabilities required in modern enterprise data strategies

While Hadoop was the early winner in big data platforms, enterprises today are investing in a mix of them, including Apache Spark, Apache Hive, Snowflake, multiple databases supported on AWS, Azure and Google Cloud Platform, and many others. Using multiple big data platforms creates significant challenges for CIOs because attracting data- and analytics-skilled people is highly competitive and managing numerous platforms adds operational and security complexities. While many enterprises are likely to consolidate to fewer data platforms as part of their strategy, they also must consider services, tools, partnerships and training to provide better support across several data platforms. Since large enterprises are unlikely to be able to centralize data in one data warehouse or data lake, establishing a data catalog becomes even more strategically important. Data catalogs help end users search, identify and learn more about data repositories that they can use for analytics, machine learning experiments and application development.
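As an illustration of what a catalog entry and lookup might look like, here is a minimal TypeScript sketch (the `CatalogEntry` fields and `searchCatalog` function are assumptions, not any particular catalog product's schema or API):

```typescript
// Illustrative shape of a data catalog entry.
interface CatalogEntry {
  name: string;
  platform: string;        // e.g., "Spark", "Hive", "Snowflake", "AWS", "Azure", "GCP"
  description: string;
  owner: string;
  tags: string[];
}

// A minimal keyword search across the catalog: the kind of lookup an
// analyst might run before starting an ML experiment or new application.
function searchCatalog(entries: CatalogEntry[], keyword: string): CatalogEntry[] {
  const k = keyword.toLowerCase();
  return entries.filter(e =>
    e.name.toLowerCase().includes(k) ||
    e.description.toLowerCase().includes(k) ||
    e.tags.some(t => t.toLowerCase().includes(k))
  );
}
```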


Modernize Your C# Code - Part IV: Types

The relevance of inspecting and accessing type information at runtime increased, leading to capabilities like reflection. While classic native systems usually have very limited runtime capabilities (e.g., C++), managed systems appeared with vast possibilities (e.g., the JVM or .NET). One of the issues with this approach today is that many types no longer originate in the underlying system - they come from deserialization of some data (e.g., an incoming request to a web API). While the basic validation and deserialization could be based on a type defined in the system, usually it is just a derivation of such a type (e.g., omitting certain properties, adding new ones, changing the types of certain properties, ...). As it stands, duplication and limitations arise when dealing with such data. Hence the appeal of dynamic programming languages, which offer more flexibility in that regard - at the cost of type safety during development. Every problem has a solution, and in the last 10 years we've seen new love for type systems and type theory appearing all over the place.
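That kind of "derivation of a type" is exactly what modern structural type systems express well. As a hedged illustration (the `Customer` and `CreateCustomerRequest` names are invented, not from the article), here is how TypeScript's utility types can describe a request shape derived from a system type:

```typescript
// The full type as defined in the system.
interface Customer {
  id: number;
  name: string;
  email: string;
  createdAt: Date;
}

// An incoming web API request often carries a derivation of that type:
// some properties omitted (id, createdAt), others added.
type CreateCustomerRequest = Omit<Customer, "id" | "createdAt"> & {
  marketingOptIn: boolean; // added property
};

// Deserialization plus basic validation against the derived type.
function parseCreateCustomer(json: string): CreateCustomerRequest {
  const data = JSON.parse(json) as Partial<CreateCustomerRequest>;
  if (typeof data.name !== "string" || typeof data.email !== "string") {
    throw new Error("invalid payload");
  }
  return { name: data.name, email: data.email, marketingOptIn: !!data.marketingOptIn };
}
```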


DARPA looks for new NICs to speed up networks

The FastNICs program will select a challenge application and provide it with the hardware support it needs, operating system software, and application interfaces that will enable an overall system acceleration that comes from having faster NICs. Researchers will design, implement, and demonstrate 10 Tbps network interface hardware using existing or road-mapped hardware interfaces. The hardware solutions must attach to servers via one or more industry-standard interface points, such as I/O buses, multiprocessor interconnection networks and memory slots, to support the rapid transition of FastNICs technology. “It starts with the hardware; if you cannot get that right, you are stuck. Software can’t make things faster than the physical layer will allow so we have to first change the physical layer,” said Smith. The next step would be developing system software to manage the FastNICs hardware. The open-source software, based on at least one open-source OS, would enable faster, parallel data transfer between network hardware and applications.


Why DevOps underscores the importance of software testing

Continuous testing is definitely a hot area right now. We hear a lot about it when we're out and about speaking with customers. Obviously, you've got a variety of roles if you're going to make continuous testing work. And I know there are a lot of definitions out there, so maybe I should start with my definition, which is that continuous testing is really the practice of testing across the entire lifecycle. The goal there is to place testing and do testing at the right time in the right place, where you're going to uncover any defects or unexpected behaviors quickly, and resolve them, obviously, but most importantly help the business make good decisions. So, I think there are a lot of roles in that definition. You've obviously got your traditional software testers -- they might be manual, they might be automated, we can talk about manual versus automated -- they're obviously playing a critical role in continuous testing, but so are your developers. Because we really do expect and want developers to be involved in the testing process, [for] at least the unit test level.
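As a simple example of the kind of developer-owned unit test this implies, here is a minimal TypeScript sketch using Node's built-in assert module (the `applyDiscount` function is invented for illustration, not from the interview):

```typescript
import assert from "node:assert";

// A tiny unit of business logic that a developer owns...
function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new Error("invalid discount");
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// ...and the unit tests that run on every commit, surfacing defects
// long before a separate, late-stage test phase would.
assert.strictEqual(applyDiscount(100, 10), 90);
assert.strictEqual(applyDiscount(19.99, 0), 19.99);
assert.throws(() => applyDiscount(50, 150));
console.log("all unit tests passed");
```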


Chinese cyberespionage group PKPLUG uses custom and off-the-shelf tools

CSO slideshow - Insider Security Breaches - Flag of China, binary code
What makes this group stand apart is its use of both off-the-shelf and custom-made malware tools. This includes publicly available Trojan programs like PlugX -- from which the group’s name is derived -- and Poison Ivy. One of PKPLUG’s common tactics is to deliver the PlugX malware inside a ZIP archive, whose header begins with the ASCII characters “PK”. The group also makes heavy use of DLL side-loading to execute its malicious payloads. This type of attack occurs when a legitimate program searches for a DLL library by name in various locations, including the current folder, and automatically loads it into memory. If attackers replace the library with a malicious one, the malware will be loaded and executed instead. This decreases the payload’s chance of being detected, since the process that performs the loading is not malicious itself. The group favors spear-phishing emails to deliver its payloads and uses social engineering to trick users into opening attachments. However, some limited use of Microsoft Office exploits has also been observed, as has the use of malicious PowerShell scripts.


Why TypeScript?

The TypeScript compiler does not really mandate such type definitions. IDEs like Visual Studio and tools like the Angular CLI do care, because they provide design-time type checking and compile-time type checking in strict mode. The overhead of declaring types soon pays off: design-time and compile-time type checking will boost your productivity when constructing complex structures and workflows. I am well aware that there are super-smart JavaScript developers who can also construct complex structures and workflows with high productivity (and quality); however, I tend to think they are among only the top 1% of JavaScript developers in the trade. Even if you are that smart, why would you spend your brainpower on type checking when the IDE and compiler can do it for you? One of the primary reasons TypeScript was invented was to enable sophisticated tooling for software development.
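As a small illustration of that point (the `Order` type and `totalQuantity` function are invented for this sketch, not from the article), the compiler flags shape mistakes at design time rather than at runtime:

```typescript
interface Order {
  id: number;
  items: { sku: string; quantity: number }[];
}

// The compiler and IDE do the checking for you: a missing or misspelled
// property is flagged as you type, not discovered in production.
function totalQuantity(order: Order): number {
  return order.items.reduce((sum, item) => sum + item.quantity, 0);
}

// totalQuantity({ id: 1 });                          // compile error: 'items' is missing
// totalQuantity({ id: 1, items: [{ sku: "a" }] });   // compile error: 'quantity' is missing
console.log(totalQuantity({ id: 1, items: [{ sku: "a", quantity: 2 }] })); // 2
```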



Quote for the day:


"The essence of leadership is the willingness to make the tough decisions. Prepared to be lonely." - Colin Powell

