Java, like COBOL, is relatively easy to read and write. Also like COBOL, Java has maintained its modernity. Every time Java has looked like it was fading, something has perked it up. According to Brian Leroux, Android is “definitely partially responsible for [Java’s] continued relevance.” A bit later, big data revived Java further. As Nitin Borwankar has highlighted, “Java received [a] second wind due to Hadoop and the whole data science ecosystem including Hive, HBase, Spark, Cassandra, Kafka, and JVM languages such as Groovy and Clojure. All of that is not going away anytime soon.” Indeed, as with COBOL, one of the primary reasons we’re likely to see Java etched on our headstones is that, as Jonathan Eunice writes, it’s “deployed deeply and widely in critical apps, making it worthy of systematic critique.” The more enterprises embed Java into their most mission-critical apps, the less likely it is to be ripped out and replaced with modern alternatives; the cost and risk militate against doing so. In like manner, Python may well prove its staying power. To Lauren Cooney’s mind, Python will endure because it’s a “GSD [get stuff done] language vs. a cool language.”
As the IoT enables the deployment of lower-cost sensors, we are seeing broader adoption of intelligent systems and gaining more visibility (and data) into smart environments. Not only are smart environments generating more data, but they are also producing different types of data as the number of deployed multimedia devices, such as vehicle and traffic cameras, increases. The emergence of the Internet of Multimedia Things (IoMT) is resulting in large quantities of high-volume, high-velocity multimedia event streams that need to be processed. The result is a data-rich ecosystem of structured and unstructured data (i.e., images, video, audio, and even text) describing the smart environment that can be exploited by data-driven techniques. The increased availability of data has opened the door to the use of data-driven probabilistic models, and their use within smart environments is becoming increasingly commonplace for “good enough” scenarios.
One school of thought says that continuous improvement should exist as a new mission for existing EA approaches, not as a stand-alone process with its own tools and software. Continuous improvement simply formalizes how process feedback, a staple of EA-driven businesses, drives change to business processes. In this approach, the organization relies on enterprise architects' well-established integration with software selection and development, so there is no need for new software quality tasks or tools to build out a continuous improvement process. Continuous improvement, at most, might reveal a lack of structure in how the organization manages feedback. EA teams shouldn't need much additional help from software or changed practices to make continuous improvement work. In a second school of thought, traditional EA work establishes a business process baseline. Once there's a baseline, the continuous improvement process helps keep it aligned with signals of business efficiency and opportunity.
Utility providers were caught out by a rudimentary phishing scam involving a shoe retailer and a former member of the pop group McFly. The scam email is short, with someone called ‘Adam’ providing a PDF attachment containing remittance advice. This is despite the fact that the email comes from Friary Shoes. ... Once users click the attachment, it unleashes Adwind, a type of spyware that takes screenshots; harvests credentials from Chrome, Internet Explorer and Microsoft Edge; records video and audio; takes photos; steals files; performs keylogging; reads emails; and steals VPN certificates. Experts found that Adwind was available as Spyware-as-a-Service, meaning anyone willing to pay for it could use the malware to target organisations. ... The common denominator is your staff. They are the ones who are targeted, and once they open a phishing email, the only thing preventing a data breach is their ability to spot that it’s a scam. Fortunately, there are always clues that reveal the true nature of malicious emails, and our Phishing Staff Awareness E-Learning Course teaches you how to spot them.
As the entire world looks to harness the potential of AI for the growth of industry and the betterment of society, India has already taken steps to leverage AI in all walks of life. To foster AI-led growth across all sections of society, the government is taking steps to promote Indian tech talent and skills in service of national goals. With its unique vision of “AI for All”, India can enhance and empower human capabilities to address the challenges of accessibility, affordability and quality of skilled expertise, which, in turn, can help develop scalable solutions for emerging technologies by leveraging collaborations and partnerships among stakeholders including industry, industry associations, academia, and state and central governments. Beyond collaboration, this segment requires substantial incentives and funding support from those stakeholders. Because this technology demands massive R&D and innovation, and highly skilled manpower to create world-class products, sufficient funding is necessary to exploit it to its fullest extent.
The rate of technological advancement — especially in the data management space — has been astronomical over the last five years. Enterprises find themselves in transit between legacy physical servers and cloud storage, all while streaming data in real time from various sources in a wide array of formats and file types. These changes present unique challenges that can be difficult to navigate without the right data platform. This constant state of flux is the new normal, as technological advancement shows no signs of slowing down. Enterprises need not only to adjust to the current evolving data ecosystem but to future-proof for what tomorrow will bring. At the core of all of this has to be a strong foundation for building around, and adapting to, various technologies, architectures, and frameworks. That’s where the idea of constructing a data fabric comes into play. Interconnectivity, ease of access, clean and reliable data, and data governance cannot be afterthoughts. Companies that try to implement a patchwork approach will find themselves unable to adapt, adding new layers of complexity to their data ecosystem each time a new piece of technology is introduced.
Confidential computing may take multiple forms, but early use cases rely on trusted execution environments (TEEs), also called trusted enclaves, where data and operations are isolated and protected from any other software, including the operating system and cloud service stack. Combined with encrypted data storage and transmission methods, TEEs can create an end-to-end protection architecture for your most sensitive data. Enterprises and cloud service providers can apply confidential computing to a wide range of workloads. The most popular early use cases employ the trusted enclave for key protection and cryptographic operations, but trusted enclaves can be used to protect any type of highly sensitive information. For example, healthcare analytics can be performed so that the enclave protects any data that may contain personally identifiable information, thus keeping results anonymous. Companies that wish to run their applications in the public cloud but don’t want their most valuable software IP visible to other software or the cloud provider can run their proprietary algorithms inside an enclave.
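The healthcare-analytics pattern described above can be sketched in miniature: records are sealed before they cross into untrusted territory, processing happens only inside the trusted boundary, and nothing but an anonymous aggregate ever leaves it. The XOR "cipher" and hard-coded key below are stand-ins so the example stays stdlib-only and runnable; a real TEE would seal keys in hardware and attest the enclave code before releasing them.

```python
# Toy sketch of the enclave pattern. NOT real cryptography: the XOR cipher
# and in-process "enclave" are placeholders for hardware-backed sealing
# and isolation (e.g. what SGX-style enclaves provide).
import json

KEY = b"enclave-sealed-demo-key"  # hypothetical sealed key, for illustration


def seal(record: dict) -> bytes:
    """Encrypt a record before it crosses into untrusted territory."""
    raw = json.dumps(record).encode()
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(raw))


def enclave_average_age(sealed_records: list) -> float:
    """Runs 'inside' the enclave: decrypts, computes, returns only an aggregate."""
    ages = []
    for blob in sealed_records:
        raw = bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(blob))
        ages.append(json.loads(raw)["age"])  # PII never leaves this function
    return sum(ages) / len(ages)


patients = [{"name": "A", "age": 34}, {"name": "B", "age": 52}, {"name": "C", "age": 45}]
sealed = [seal(p) for p in patients]
print(enclave_average_age(sealed))  # only the de-identified aggregate is exposed
```

The point of the design is that identifiable fields exist in plaintext only within the trusted function, mirroring how an enclave keeps decrypted data out of reach of the OS and cloud stack.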
People naturally hate change. It is a law of nature. You can't change that, but you can show people the value of the change so that the desire to change overcomes natural resistance. Changing culture can be especially challenging, but it has to be done to properly implement a DevOps culture. As the famed business strategist Peter Drucker said, "Culture eats strategy for breakfast." ... The Authority-to-Operate (ATO) is a specific term for the U.S. government, but the concept can be found across organizations. In essence, security needs to approve an application, or even a subset of a larger application, to operate on the system. ... Janek and Svetlana are implementing DevOps-as-a-Service to solve the aforementioned challenges. It involves a team and a platform with the goal of continuously increasing the number and scale of self-service capabilities. Svetlana dove into some specifics of the platform, Monarch, during the presentation. Monarch is a platform that GDIT created for their government client. It is a self-service platform that development teams can use to build Docker containers and deploy them into AWS Elastic Container Service.
Fileless techniques can be extremely advanced, and they are harder for traditional antivirus software to detect. But not every advanced malware attack is fileless, and throwing the term around doesn't help organisations defend against it, Tanmay Ganacharya told TechRepublic. Ganacharya runs the Microsoft Defender threat research team, which analyses new threats and builds models to detect them. "Fileless is such an overused term, and it has gone from the truly fileless threats, to now people wanting to call almost everything that is even slightly advanced fileless and making it slightly buzzwordy," he says. To demystify the term, the threat research team started categorising fileless attacks based on how they get onto a PC and where they're hosted. There are more than a dozen combinations of those 'entry points' and malware hosts being used for fileless attacks — some of which are very sophisticated and rarely seen, reserved for targeted attacks, and some of which have been commoditised and are showing up more often in common attacks like trying to run a coin miner on your system. But they fall into three broad groups.
Because IoT objects can lack comprehensive input validation mechanisms, extending the coverage of test payloads is desirable. A widely used method, fuzz testing, employs randomly generated payloads, but this is inefficient due to resources wasted on meaningless inputs. An alternative is to exhaustively or randomly generate syntax-correct inputs. This method provides better test coverage but is still inefficient, as the space of syntax-correct inputs is usually large. Intelligently mutating known payloads is a compromise between manual testing and exhaustive/random testing. Combining existing evasion techniques provides greater ability to circumvent validation mechanisms; in this case, conflicting or overlapping techniques should be manipulated carefully to prune unnecessary test cases. Converting payloads to syntactically or semantically equivalent payloads is also worthy of further investigation. Syntactic mutation generates payloads with slight changes at the syntax level.
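As a rough illustration of the payload-mutation approach, the sketch below applies combinations of common syntactic rewrites (case alternation, URL-encoding of spaces, SQL inline comments) to a known payload, using a set to prune the duplicate variants that overlapping techniques produce. The seed payload and mutation rules are illustrative only, not an evasion toolkit.

```python
# Minimal sketch of syntactic payload mutation for validation testing.
import itertools


def case_swap(p: str) -> str:
    """Alternate character case (defeats naive case-sensitive keyword filters)."""
    return "".join(c.upper() if i % 2 else c.lower() for i, c in enumerate(p))


def url_encode_spaces(p: str) -> str:
    """URL-encode spaces, a common encoding-based evasion."""
    return p.replace(" ", "%20")


def sql_comment_spaces(p: str) -> str:
    """Replace spaces with inline comments, a classic SQL filter evasion."""
    return p.replace(" ", "/**/")


MUTATORS = [case_swap, url_encode_spaces, sql_comment_spaces]


def mutate(payload: str, max_depth: int = 2) -> list:
    """Apply combinations of mutators up to max_depth, pruning duplicates.
    (The two space rewrites overlap: whichever runs first leaves no spaces
    for the other, so the set collapses those redundant test cases.)"""
    seen = {payload}
    for depth in range(1, max_depth + 1):
        for combo in itertools.permutations(MUTATORS, depth):
            variant = payload
            for m in combo:
                variant = m(variant)
            seen.add(variant)
    return sorted(seen)


for v in mutate("' OR 1=1 --"):
    print(v)
```

Each variant is syntactically equivalent to the seed from an attacker's perspective, so a validation mechanism that rejects the seed but accepts a variant has a coverage gap worth reporting.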
Quote for the day:
"Brilliant strategy is the best route to desirable ends with available means." - Max McKeown