
Daily Tech Digest - May 29, 2024

Algorithmic Thinking for Data Scientists

While data scientists with computer science degrees will be familiar with the core concepts of algorithmic thinking, many increasingly enter the field from other backgrounds, ranging from the natural and social sciences to the arts; this trend is likely to accelerate in the coming years as a result of advances in generative AI and the growing prevalence of data science in school and university curriculums. ... One topic that deserves special attention in the context of algorithmic problem solving is complexity. When comparing two different algorithms, it is useful to consider the time and space complexity of each, i.e., how the time and space taken by each algorithm scale with the problem size (or data size). ... Some algorithms may manifest additive or multiplicative combinations of the above complexity levels. For example, a for loop followed by a binary search entails an additive combination of linear and logarithmic complexities, attributable to sequential execution of the loop and the search routine, respectively.
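
To make the additive case concrete, here is a minimal Python sketch (illustrative only, with a made-up sorted list): the linear pass contributes O(n), the binary search contributes O(log n), and running them in sequence gives O(n + log n), which the linear term dominates.

    import bisect

    def total_and_lookup(values, target):
        """Linear scan O(n) followed by binary search O(log n): O(n + log n) overall."""
        total = 0
        for v in values:  # linear: touches every element once
            total += v
        idx = bisect.bisect_left(values, target)  # logarithmic: halves the range each step
        found = idx < len(values) and values[idx] == target
        return total, found

    # The list must be sorted for the binary search step to be valid.
    print(total_and_lookup([1, 3, 5, 7, 9], 7))  # (25, True)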


Job seekers and hiring managers depend on AI — at what cost to truth and fairness?

The darker side to using AI in hiring is that it can bypass potential candidates based on predetermined criteria that don’t necessarily take all of a candidate’s skills into account. And for job seekers, the technology can generate great-looking resumes, but often they’re not completely truthful when it comes to skill sets. ... “AI can sound too generic at times, so this is where putting your eyes on it is helpful,” Toothacre said. She is also concerned about the use of AI to complete assessments. “Skills-based assessments are in place to ensure you are qualified and check your knowledge. Using AI to help you pass those assessments is lying about your experience and highly unethical.” There’s plenty of evidence that genAI can improve resume quality, increase visibility in online job searches, and provide personalized feedback on cover letters and resumes. However, concerns about overreliance on AI tools, lack of human touch in resumes, and the risk of losing individuality and authenticity in applications are universal issues that candidates need to be mindful of regardless of their geographical location, according to Helios’ Hammell.


Comparing smart contracts across different blockchains from Ethereum to Solana

Polkadot is designed to enable interoperability among various blockchains through its unique architecture. The network’s core comprises the relay chain and parachains, each playing a distinct role in maintaining the system’s functionality and scalability. ... Developing smart contracts on Cardano requires familiarity with Haskell for Plutus and an understanding of Marlowe for financial contracts. Educational resources like the IOG Academy provide learning paths for developers and financial professionals. Tools like the Marlowe Playground and the Plutus development environment aid in simulating and testing contracts before deployment, ensuring they function as intended. ... Solana’s smart contracts are stateless, meaning the contract logic is separated from the state, which is stored in external accounts. This separation enhances security and scalability by isolating the contract code from the data it interacts with. Solana’s account model allows for program reusability, enabling developers to create new tokens or applications by interacting with existing programs, reducing the need to redeploy smart contracts, and lowering costs.
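
To make Solana's stateless model concrete, here is a loose Python analogy (illustrative only; real Solana programs are written in languages like Rust and run on-chain): the program holds no data of its own and operates solely on the account state handed to it, so one deployed program can serve any number of independent accounts.

    def transfer_program(accounts, amount):
        """Stateless logic: reads and writes only the account data it is passed."""
        src, dst = accounts["source"], accounts["destination"]
        if src["balance"] < amount:
            raise ValueError("insufficient funds")
        src["balance"] -= amount
        dst["balance"] += amount

    # State lives outside the program, in separate accounts.
    alice = {"balance": 100}
    bob = {"balance": 0}
    transfer_program({"source": alice, "destination": bob}, 40)
    print(alice, bob)  # {'balance': 60} {'balance': 40}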


3 things CIOs can do to make gen AI synch with sustainability

“If you’re only buying inference services, ask them how they can account for all the upstream impact,” says Tate Cantrell, CTO of Verne, a UK-headquartered company that provides data center solutions for enterprises and hyperscalers. “Inference output takes a split second. But the only reason those weights inside that neural network are the way they are is because of massive amounts of training — potentially one or two months of training at something like 100 to 400 megawatts — to get that infrastructure the way it is. So how much of that should you be charged for?” Cantrell urges CIOs to ask providers about their own reporting. “Are they doing open reporting about the full upstream impact that their services have from a sustainability perspective? How long is the training process, how long is it valid for, and how many customers did that weight impact?” According to Sundberg, an ideal solution would be to have the AI model tell you about its carbon footprint. “You should be able to ask Copilot or ChatGPT what the carbon footprint of your last query is,” he says. 
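
Cantrell's question of how much training impact an inference customer should be charged for is, at heart, an amortization problem. A back-of-the-envelope Python sketch, where every number is an assumption chosen purely for illustration:

    # All figures below are assumptions for illustration, not measured values.
    TRAINING_POWER_MW = 300                # within the 100 to 400 MW range quoted above
    TRAINING_DAYS = 45                     # "one or two months of training"
    LIFETIME_QUERIES = 10_000_000_000      # assumed queries served before retraining

    training_energy_mwh = TRAINING_POWER_MW * 24 * TRAINING_DAYS
    wh_per_query = training_energy_mwh * 1_000_000 / LIFETIME_QUERIES

    print(f"Training energy: {training_energy_mwh:,} MWh")               # 324,000 MWh
    print(f"Amortized training share per query: {wh_per_query:.1f} Wh")  # 32.4 Wh

Change the assumed lifetime query count and the per-query share moves proportionally, which is exactly why Cantrell's "how many customers did that weight impact?" question matters.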


EU’s ChatGPT taskforce offers first look at detangling the AI chatbot’s privacy compliance

The taskforce’s report discusses this knotty lawfulness issue, pointing out that ChatGPT needs a valid legal basis for all stages of personal data processing — including collection of training data; pre-processing of the data (such as filtering); training itself; prompts and ChatGPT outputs; and any training on ChatGPT prompts. The first three of the listed stages carry what the taskforce couches as “peculiar risks” for people’s fundamental rights — with the report highlighting how the scale and automation of web scraping can lead to large volumes of personal data being ingested, covering many aspects of people’s lives. It also notes that scraped data may include the most sensitive types of personal data (which the GDPR refers to as “special category data”), such as health information, sexuality, and political views — data that requires an even higher legal bar for processing than general personal data. On special category data, the taskforce also asserts that just because it’s public does not mean it can be considered to have been made “manifestly” public — which would trigger an exemption from the GDPR requirement for explicit consent to process this type of data.


Avoiding the cybersecurity blame game

Genuine negligence or deliberate actions should be handled appropriately, but apportioning blame and meting out punishment must be the final step in an objective, reasonable investigation. It should certainly not be the default reaction. So far, so reasonable, yes? But things are a little more complicated than this. It’s all very well saying, “don’t blame the individual, blame the company”. Effectively, no “company” does anything; only people do. The controls, processes and procedures that let you down were created by people – just different people. If we blame the designers of controls, processes and procedures… well, we are just shifting blame, which is still counterproductive. ... Managers should use the additional resources to figure out how to genuinely change the work environment in which employees operate and make it easier for them to do their jobs in a secure, practical manner. Managers should implement a circular, collaborative approach to creating a frictionless, safer environment, working positively and without blame.


The decline of the user interface

The Ok and Cancel buttons played important roles. A user might go to a Settings dialog, change a bunch of settings, and then click Ok, knowing that their changes would be applied. But often, they would make some changes and then think “You know, nope, I just want things back like they were.” They’d hit the Cancel button, and everything would reset to where they started. Disaster averted. Sadly, this very clear and easy way of doing things somehow got lost in the transition to the web. On the web, you will often see Settings pages without Ok and Cancel buttons. Instead, you’re expected to click an X in the upper right to make the dialog close, accepting any changes that you’ve made. ... In the newer versions of Windows, I spend a dismayingly large amount of time trying to get the mouse to the right spot in the corner or edge of an application so that I can size it. If I want to move a window, it is all too frequently difficult to find a location at the top of the application to click on that will result in the window being relocated. Applications used to have a very clear title bar that was easy to see and click on.
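
The behavior being mourned here is essentially a snapshot-and-restore (memento) pattern. A minimal Python sketch of the classic dialog contract, using a hypothetical settings dictionary:

    import copy

    class SettingsDialog:
        """Snapshot on open; Ok commits the edits, Cancel restores the snapshot."""

        def __init__(self, settings):
            self.settings = settings
            self._snapshot = copy.deepcopy(settings)  # taken when the dialog opens

        def ok(self):
            self._snapshot = copy.deepcopy(self.settings)  # keep the changes

        def cancel(self):
            self.settings.clear()
            self.settings.update(self._snapshot)  # everything resets to where it started

    prefs = {"theme": "light", "font_size": 12}
    dialog = SettingsDialog(prefs)
    prefs["theme"] = "dark"      # the user changes a bunch of settings...
    dialog.cancel()              # ...then thinks "nope, put it back like it was"
    print(prefs)                 # {'theme': 'light', 'font_size': 12}

An X-to-close page that applies changes as you make them has no equivalent of cancel(), which is precisely the regression described above.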


Lawmakers paint grim picture of US data privacy in defending APRA

At the center of the debate is the American Privacy Rights Act (APRA), the push for a federal data privacy law that would either simplify a patchwork of individual state laws – or run roughshod over existing privacy legislation, depending on which state is offering an opinion. While harmonizing divergent laws seems wise as a general measure, states like California, where data privacy laws are already much stricter than in most places, worry about its preemptive clauses weakening their hard-fought privacy protections. Rodgers says APRA is “an opportunity for a reset, one that can help return us to the American Dream our Founders envisioned. It gives people the right to control their personal information online, something the American people overwhelmingly want. They’re tired of having their personal information abused for profit.” From loose permissions on sharing location data to exposed search histories, there are far too many holes in Americans’ digital privacy for Rodgers’ liking. Pointing to the especially sensitive matter of children’s data, she says that “as our kids scroll, companies collect nearly every data point imaginable to build profiles on them and keep them addicted. ...”


Picking an iPaaS in the Age of Application Overload

Companies face issues using proprietary integration solutions, as they end up with black-box solutions with limited flexibility. For example, the inability to natively embed outdated technology into modern stacks, such as cloud native supply chains with CI/CD pipelines, can slow down innovation and complicate the overall software delivery process. Companies should favor iPaaS technologies grounded in open source and open standards. Can you deploy it to your container orchestration cluster? Can you plug it into your existing GitOps procedures? Such solutions not only ensure better integration into proven QA-tested procedures but also offer greater freedom to migrate, adapt and debug as needs evolve. ... As organizations scale, so too must their integration solutions. Companies should avoid iPaaS solutions offering only superficial “cloud-washed” capabilities. They should prioritize cloud native solutions designed from the ground up for the cloud, and that leverage container orchestration tools like Kubernetes and Docker Swarm, which are essential for ensuring scalability and resilience.
Shifting left is a cultural and practice shift, but it also includes technical changes to how a shared testing environment is set up. ... The approach scales effectively across engineering teams, as each team or developer can work independently on their respective services or features, thereby reducing dependencies. While this is great advice, it can feel hard to implement in the current development environment: If the process of releasing code to a shared testing cluster takes too much time, it doesn’t seem feasible to test small incremental changes. ... The difference between finding bugs as a user and finding them as a developer is massive: When an operations or site reliability engineer (SRE) finds a problem, they need to find the engineer who released the code, describe the problem they’re seeing, and present some steps to replicate the issue. If, instead, the original developer finds the problem, they can cut out all those steps by looking at the output, finding the cause, and starting on a fix. This proactive approach to quality reduces the number of bugs that need to be filed and addressed later in the development cycle.



Quote for the day:

"The best and most beautiful things in the world cannot be seen or even touched- they must be felt with the heart." -- Helen Keller

Daily Tech Digest - December 11, 2018

Using a password manager: 7 pros and cons

NIST SP 800-63 recommends using non-password methods where possible, and although the recommendations are definitely against forcing users to use very long and complex passwords, they don’t limit password length or complexity. When people are forced to create and use long, complex, and frequently changing passwords, they do a poor job of it. They reuse the same passwords across different websites or use only slightly different passwords, which creates an easy-to-decipher pattern. If those same people use MFA or other non-memorization authentication methods, the cycle of repeated passwords and patterns can be broken. If a person can use a password manager, which creates and uses long and complex passwords that the person doesn’t have to remember, then perhaps you can get the best of both worlds. Until recently, I had never completely depended on one, throwing away all my memorized passwords. I felt bad about recommending password managers without “living” with them.
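
The "best of both worlds" hinges on the manager generating passwords no human would ever memorize. A minimal sketch of that generation step using Python's standard secrets module:

    import secrets
    import string

    def generate_password(length: int = 24) -> str:
        """Build a long, random password the user never has to remember."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # A unique password per site means no reusable pattern to decipher.
    print(generate_password())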



Facebook Filed A Patent To Calculate Your Future Location

Another Facebook patent application titled “Location Prediction Using Wireless Signals on Online Social Networks” describes how tracking the strength of Wi-Fi, Bluetooth, cellular, and near-field communication (NFC) signals could be used to estimate your current location, in order to anticipate where you will go next. This “background signal” information is used as an alternative to GPS because, as the patent describes, it may provide “the advantage of more accurately or precisely determining a geographic location of a user.” The technology could learn the category of your current location (e.g., bar or gym), the time of your visit to the location, the hours that entity is open, and the popular hours of the entity. For example, in a map from the patent that demonstrates how the tech would work, Facebook would see that you are in geographic location 302 — and it could predict you’d be likely to go to locations 304, 306, and 308 next, based on places you’ve visited before (maybe you’ve gone to Starbucks after visiting Walgreens) or on the travel behavior of other users the same age as you.
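
At its simplest, the "places you've visited before" logic is a first-order transition model over visit history. A toy Python sketch (purely illustrative; the patent application does not disclose Facebook's actual model):

    from collections import Counter, defaultdict

    def build_transitions(visits):
        """Count how often each location follows another in a visit history."""
        transitions = defaultdict(Counter)
        for prev, nxt in zip(visits, visits[1:]):
            transitions[prev][nxt] += 1
        return transitions

    def predict_next(transitions, current, k=3):
        """Return the k most frequent successors of the current location."""
        return [place for place, _ in transitions[current].most_common(k)]

    history = ["Walgreens", "Starbucks", "gym", "Walgreens", "Starbucks", "bar"]
    model = build_transitions(history)
    print(predict_next(model, "Walgreens"))  # ['Starbucks']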


Be Prepared for Disruption: Thinking the New Unthinkables


The main conclusion is that the conformity — defined as adhering to conventional wisdom — that gets leaders to the top too often disqualifies them from grasping the scale and nature of disruption. Leaders are saddled with what Geoff Mulgan, chief executive of Nesta, a global innovation foundation in the United Kingdom, labels “zombie orthodoxies.” These leaders rise through the ranks listening and conforming to those like them. But disruption requires precisely the opposite: It needs leaders to think, and plan for, unthinkables. In order to do this, it is imperative to have a clear purpose and to embrace diversity, inclusivity, and new behaviors, which will help leaders understand and even anticipate the impact of disruption. It is an enormous Rubik’s Cube. As one top professional told us: Leaders today confront having to “eat an elephant in one mouthful.” This is not a case of trying to break down today's challenges into neat solutions.


IT strategy: How to be an influential digital leader

Like von Schirmeister, Gideon Kay -- who is European CIO at Dentsu Aegis Network -- says IT leaders must be alert to the fact that people on the board increasingly have a take on technology, just as they would on sales, marketing and operations. Kay says CIOs must see this new interest in digital transformation as an opportunity to influence. "You don't have to bite your lip," he says. "Once you've built your credibility, which you need to do pretty quickly, and providing you've built a reputation for explaining technology in the right way -- which is about talking in terms of the business and commercial impact -- then you can give the business the definitive line on technology." Kay says CIOs can use their experience to say which services the business should be worried about, and which are the ones that don't matter: "These are the things that are hot, and these are the things that are not," he explains.


How to tame enterprise communications services

Having an organization-wide communications policy in concert with both organizational objectives and IT capabilities is a first step, just as is the case with BYOD and security. Solutions must similarly be in concert with this policy, with no exceptions. Once the communications policy is in place, a solution set can be assembled and aligned with the general framework we introduced above. In general, the process here will follow that which is typically applied to all IT services, including a requirements analysis, service set definition, long and short lists of candidate products and services (and, increasingly rarely, new internal development), and experiential analysis and evaluation via alpha and beta tests. The rollout of the solution must be accompanied by consciousness-raising, education, support, and monitoring for management visibility with respect to both the policy and the solution. Once again, IT must reinforce the importance of using only approved channels and facilities, and of avoiding out-of-band solutions that are difficult or impossible to monitor, including social media.



Is Blockchain A Solution For Securing Centralized ID Databases?

Clearly, the way that some centralized identity databases are currently secured doesn't work. I believe that technology industry professionals should think outside the box to create a security solution for centralized databases. Some think blockchain is the answer. They believe that a distributed ledger could be used to decentralize identity information. Using the blockchain, identity information could be stored securely using cryptography. This is similar to how cryptocurrencies are cryptographically stored in wallets on the blockchain. A wide variety of identity documents could be stored on the blockchain in a single place — an identity wallet of sorts — and each wallet could have its own form of encryption. The main advantage of doing this is that the identity information would become decentralized on a distributed ledger. This would make it a lot harder for cybercriminals to perform large-scale identity data breaches because they would have to hack into each wallet individually.
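
A minimal sketch of the per-wallet encryption idea, using the third-party Python cryptography package (illustrative only; no particular blockchain's wallet format is implied):

    # pip install cryptography
    from cryptography.fernet import Fernet

    # Each identity wallet gets its own key, so compromising one wallet
    # reveals nothing about any other.
    wallet_key = Fernet.generate_key()
    wallet = Fernet(wallet_key)

    encrypted_doc = wallet.encrypt(b'{"doc": "passport", "number": "X123"}')

    # Only the holder of this wallet's key can recover the document.
    print(wallet.decrypt(encrypted_doc))

The ledger would then store only encrypted records, so an attacker must break wallets one key at a time rather than dumping a single plaintext database.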


IT pros look to iPaaS tools for LOB integration demands


Application automation and integration are central to nearly every project these days at Wilbur-Ellis, a $3 billion holding company with divisions in agribusiness, chemicals and feed. "If I look back on the last three major projects, they all involve a separate system that has to integrate," said Dan Willey, CIO at the San Francisco-based company. Many of these iPaaS tools are conceptually good for modern, cloud-based companies, but sometimes you are saddled with an application that doesn't play well. In the case of Wilbur-Ellis, an ERP system from Oracle's JD Edwards is a stumbling block, Willey said. Wilbur-Ellis uses Dell Boomi's connectors to connect customer and order data. The company will also use the tool in a broader sense as an API management platform. "It's a hard problem to solve," Willey said. "It's interchanging between your tool sets, data in your back-end systems, front-end systems, IoT data and other things that need to be lined up to make it happen."


CrowdStrike: More Organizations Now Self-Detect Their Own Cyberattacks

Three-quarters of enterprises this year discovered on their own that they had been hacked rather than learning it from a third party. The bad news: it took them an average of 85 days to spot an attack. That means hackers still have the upper hand. What's more, attackers need less than two hours, on average, to move from the initially attacked machine further into a target's network, according to CrowdStrike, which today published its "Cyber Intrusion Services Casebook, 2018," a report on a sampling of its real-world incident response (IR) investigations for clients. "We noticed attackers this year were pretty brazen and stealthy: Eighty-six days [before getting discovered] is still a problem," even when victim organizations are getting better at self-detection, says Tom Etheridge, vice president of services for CrowdStrike. The number of hacked organizations that spotted their own attacks rose 7% this year over those from CrowdStrike Services' IR engagements in 2017.


The top skills needed by data scientists in 2019

The data analyst role is suited to most businesses. Able to convert business challenges into opportunities for data analysis, the analyst often bridges the gap between the technical and the practical. A machine learning engineer is looking to make an algorithm run quickly and in a distributed environment. Asking them to analyze data and find nuggets of relevant business insight isn't their forte, but an ML engineer can select the appropriate algorithm and implement it within the company's production system without introducing a bottleneck. A research data scientist is interested in investigating cutting-edge techniques or inventing new ones. This role usually requires a Ph.D., and extreme familiarity with the underlying mathematics is a must. It's important to note that this type of individual contributor would be bored out of their mind working on everyday business problems. The manager is the ultimate bridge between the various technical roles, business stakeholders, and other leadership. Managers frequently facilitate their teams' best work while ensuring outcomes map to business goals and prove ROI.


Satan Ransomware Variant Exploits 10 Server-Side Flaws

"There is a risk of extensive infections because [of the] big arsenal of vulnerabilities that [the malware] attempts to exploit," says Apostolos Giannakidis, security architect at Waratek, which also posted a blog on the threat. All of the vulnerabilities are easy to exploit, and actual exploits are publicly available for many of them that allow attackers to compromise vulnerable systems with little to no customization required, he says. Several of the vulnerabilities used by Lucky were disclosed just a few months ago, which means that the risk of infection is big for organizations that have not yet patched their systems, Giannakidis says. All but one of the server-side vulnerabilities that Lucky uses affect Java server apps. "The vulnerabilities that affect JBoss, Tomcat, WebLogic, Apache Struts 2, and Spring Data Commons are all remote code execution vulnerabilities that allow attackers to easily execute OS commands on any platform," he notes.



Quote for the day:

"Colors fade, temples crumble, empires fall, but wise words endure." -- Edward Thorndike