
Author Posts

122 Articles

Automate Your Microsoft Intune Tasks with Graph API

Andrew Taylor
19 Nov 2024
10 min read
Why now is the time to start your automation journey with Intune and Graph

With more and more organizations moving to Microsoft Intune and IT departments under constant strain, automating your regular tasks is an excellent way to free up time for the many other important tasks competing for your attention.

When dealing with Microsoft Intune and the related Microsoft Entra tasks, every click in the portal UI sends API requests to Microsoft Graph, which sits underneath everything and controls exactly what is happening and where. Fortunately, Microsoft Graph is a public API, so anything carried out within the portal can be scripted and automated.

Imagine a world where, thanks to automation, you log in to your machine in the morning and waiting for you is an email or Teams message, generated overnight, containing everything you need to know about your environment. You can use this information to quickly resolve any new issues and be extra proactive with your user base, calling them to resolve issues before they have even noticed them. This is just the tip of the iceberg of what can be done with Microsoft Graph; the possibilities are endless.

Microsoft Graph is a web API that can be accessed and manipulated from most programming and scripting languages, so if you have a preferred language, you can get started extremely quickly. For those starting out, PowerShell is an excellent choice, as the Microsoft Graph SDK includes modules that take the effort out of the initial connection and help write the requests. For those more experienced, switching to the C# SDK opens up more scalability and quicker performance, but ultimately it is the same web requests underneath, so once you have a basic knowledge of the API, moving these skills between languages is much easier.

When looking to learn the API, an excellent starting point is to open the F12 browser tools, select Network, click around in the portal, and watch the network traffic. The traffic will be in the form of GET, POST, PUT, DELETE, and BATCH requests, depending on what action is being performed. GET is used to retrieve information and is one-way traffic: it retrieves from Graph and returns the result to the client. POST and PUT are used to send data to Graph. DELETE is fairly self-explanatory and is used to delete records. BATCH is used to increase performance in more complex tasks: it groups multiple Graph API calls into one command, which reduces the number of calls and improves performance. It works extremely well, but starting with the more basic commands is always recommended.

Once you have mastered Graph calls from a local device with interactive authentication, the next step is to create Entra app registrations and run the same calls locally, but with non-interactive authentication. This feeds into true automation, where tasks can be set to run without any user involvement; at this point, learning about Azure Automation accounts, Azure Functions, and Logic Apps will prove incredibly useful. For larger environments, you can take it a step further and use Azure DevOps pipelines to trigger tasks and even implement approval processes.

Some real-world examples of automation with Graph include new environment configuration, policy management, and application management, right through to documenting and monitoring policies. Once you have a basic knowledge of the Graph API and PowerShell, it is simply a case of slotting them together and watching where the process takes you.
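To make the request types above concrete, here is a minimal sketch in Python of a non-interactive Graph call: it acquires an app-only token with the MSAL library and issues a plain GET request against the managed devices endpoint. The tenant ID, client ID, and secret are placeholders, and the sketch assumes you have already created an Entra app registration with an application permission such as DeviceManagementManagedDevices.Read.All granted.

```python
# A minimal sketch of a non-interactive Microsoft Graph GET request.
# Assumes an Entra app registration with application permissions granted;
# TENANT_ID, CLIENT_ID, and CLIENT_SECRET are placeholders, not real values.
import msal
import requests

TENANT_ID = "your-tenant-id"
CLIENT_ID = "your-app-registration-client-id"
CLIENT_SECRET = "your-client-secret"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# Client-credentials flow: no user sign-in, so it suits scheduled automation.
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# GET is read-only, which makes it a safe request type to practise with.
response = requests.get(
    "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
response.raise_for_status()

for device in response.json().get("value", []):
    print(device.get("deviceName"), device.get("operatingSystem"))
```

The same two steps, acquiring a token and then sending the request, map directly onto the Connect-MgGraph and Invoke-MgGraphRequest cmdlets if you would rather stay in PowerShell, as the article recommends for beginners.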
The learning never stops; before you know it, you will be creating tools for your other IT staff to use to quickly retrieve passwords on the go, or to carry out standard tasks without needing elevated privileges.

Now, I know what you are thinking: this all sounds fantastic and exactly what I need, but how do I get started, and how do I find the time to learn a new skill like this?

We can start with time management. I am sure that throughout your career you have had to learn new software, systems, and technologies without any formal training, and the best way to do that is learning by doing. The same applies here: when you are completing your normal tasks, simply have the F12 network tools open and take a quick look at the URLs and requests being sent. If you can, try to find a few minutes per day to write some practice scripts, ideally in a development environment; if that is not possible, start with GET requests, which cannot do any damage.

To take it further and learn more about PowerShell, Graph, and Intune, check out my “Microsoft Intune Cookbook”, which runs through creating a tenant from scratch, both in the portal and via Graph, including code samples for everything possible within the Intune portal. You can use these samples to expand upon and meet your needs while learning about both Intune and Graph.

Author Bio

Andrew Taylor is an End-User Compute architect with 20 years of IT experience across industries and a particular interest in Microsoft Cloud technologies, PowerShell, and Microsoft Graph. Andrew graduated with a degree in Business Studies in 2004 from Lancaster University and has since obtained numerous Microsoft certifications, including Microsoft 365 Enterprise Administrator Expert, Azure Solutions Architect Expert, and Cybersecurity Architect Expert, among others. He is currently working as an EUC Architect for an IT company in the United Kingdom, planning and automating products across the EUC space. Andrew lives on the coast in the North East of England with his wife and two daughters.


Wolf Halton on what’s changed in tech and where we are headed

Guest Contributor
20 Jan 2019
4 min read
The tech industry is changing at a massive rate, especially since storage options moved to the cloud. However, this has also raised questions about security, data management, changes in the work structure within organizations, and much more. Wolf Halton, an expert in Kali Linux, tells us about the security element in the cloud. He also touches upon the skills and knowledge that should be built into your software development cycle in order to adjust to the dynamic tech changes of the present and the future. Following this, he juxtaposes the current software development landscape with the ideal one.

Wolf, along with fellow Kali Linux expert Bo Weaver, was also interviewed on why Kali Linux is the premier platform for testing and maintaining Windows security. They talked about the advantages and disadvantages of using Kali Linux for pentesting, and about their stance on the role of pentesting in cybersecurity in general, in their interview titled “Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity”.

Security on Cloud

The biggest change in the IT environment is how business leaders and others are implementing Cloud-Services agreements. It used to be a question of IF we would put some of our data or processes in the cloud, and now it is strictly a question of WHEN. The Cloud is, first and foremost, a (failed) marketing term designed to obfuscate the actual relationship between the physical and logical networks. The security protections cloud companies give you are very good from the cabling to the hypervisor, but above that, you are on your own in the realm of security. You remain responsible for safeguarding your own data. The main difference between cloud architectures and on-premises architectures is that cloud systems aren’t as front-loaded with hardware costs and software licensing costs.

Why filling in the ‘skills gap’ is a must

The schools that teach the skills are often five or ten years behind in the technology they teach, and they tend to teach how to run tools rather than how to develop (and discard) approaches quickly. Most businesses that can afford a security department want to hire senior-level security staff only. This makes a lot of sense, as seniors are more likely to avoid beginner mistakes. But if you only hire seniors, it forces apt junior security analysts to go through a lot of exploitative off-track employment before they are able to get into the field.

Software development is not just about learning to code

Development is difficult for a host of reasons. First off, only about 5% of the people who might want to learn to code have access to the information and can think abstractly enough to be able to code. This was my experience in six years of teaching coding to college students majoring in computer networking (IT) and electrical engineering. It is about intelligence, yes, but in a group of equally intelligent people taught to code in an easy language like Python, only one in 20 will go past a first-year programming course.

Security is an afterthought for IoT developers

The Internet of Things (IoT) has created a huge security problem, which the manufacturers do not seem to be addressing responsibly. IoT devices have a design flaw similar to the one that has informed all versions of Windows to this day.
Windows was designed to be a personal plaything for technology enthusiasts who couldn’t get time on the mainframes available at the time. It was designed as a stand-alone, non-networked device. NT 3.0 brought networking and “enterprise server” Windows, but the monolithic way Windows is architected, along with the direct kernel-space attachment of third-party services, continues to give Windows more than its share of high and critical vulnerabilities. IoT devices are cheap computers, and since security is an afterthought for most developers, IoT developers create marvelously useful devices with poor or nonexistent user authentication. Expect it to get worse before it gets better (if it ever gets better).

Author Bio

Wolf Halton is an authority on computer and internet security, a best-selling author on computer security, and the CEO of Atlanta Cloud Technology. He specializes in business continuity, security engineering, open source consulting, marketing automation, virtualization and data center restructuring, network architecture, and Linux administration.

Read next: Pentest tool in focus: Metasploit
Read next: Kali Linux 2018.2 released
Read next: How artificial intelligence can improve pentesting


Simplifying AI pipelines using the FTI Architecture

Paul Iusztin
08 Nov 2024
15 min read
Introduction

Navigating the world of data and AI systems can be overwhelming. Their complexity often makes it difficult to visualize how data engineering, research (data science and machine learning), and production roles (AI engineering, ML engineering, MLOps) work together to form an end-to-end system. As a data engineer, your work finishes when standardized data is ingested into a data warehouse or lake. As a researcher, your work ends after training the optimal model on a static dataset and registering it. As an AI or ML engineer, deploying the model into production often signals the end of your responsibilities. As an MLOps engineer, your work finishes when operations are fully automated and adequately monitored for long-term stability.

But is there a more intuitive and accessible way to comprehend the entire end-to-end data and AI ecosystem? Absolutely: through the FTI architecture. Let’s quickly dig into the FTI architecture and apply it to a production LLM & RAG use case.

Figure 1: The mess of bringing structure to the common elements of an ML system.

Introducing the FTI architecture

The FTI architecture proposes a clear and straightforward mind map that any team or person can follow to compute the features, train the model, and deploy an inference pipeline to make predictions. The pattern suggests that any ML system can be boiled down to these three pipelines: feature, training, and inference. This is powerful, as we can clearly define the scope and interface of each pipeline. Ultimately, we have just 3 moving pieces instead of 20, as suggested in Figure 1, which is much easier to work with and define.

Figure 2 shows the feature, training, and inference pipelines. We will zoom in on each one to understand its scope and interface.

Figure 2: FTI architecture

Before going into the details, it is essential to understand that each pipeline is a separate component that can run on a different process or hardware. Thus, each pipeline can be written using a different technology, by a different team, or scaled differently.

The feature pipeline

The feature pipeline takes raw data as input, processes it, and outputs the features and labels required by the model for training or inference. Instead of being passed directly to the model, the features and labels are stored inside a feature store, whose responsibility is to store, version, track, and share the features. By saving the features into a feature store, we always have a state of our features. Thus, we can easily send the features to the training and inference pipelines.

The training pipeline

The training pipeline takes the features and labels from the feature store as input and outputs a trained model or models. The models are stored in a model registry. Its role is similar to that of a feature store, but this time the model is the first-class citizen. Thus, the model registry will store, version, track, and share the model with the inference pipeline.

The inference pipeline

The inference pipeline takes as input the features and labels from the feature store and the trained model from the model registry. With these two, predictions can easily be made in either batch or real-time mode. As this is a versatile pattern, it is up to you to decide what you do with your predictions. If it’s a batch system, they will probably be stored in a database. If it’s a real-time system, the predictions will be served to the client who requested them.

The most important thing you must remember about the FTI pipelines is their interface. It doesn’t matter how complex your ML system gets; these interfaces will remain the same.
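To make the interface idea concrete, here is a minimal, tool-agnostic sketch of the three pipelines in Python. The FeatureStore and ModelRegistry classes are hypothetical in-memory stand-ins for whatever store and registry you actually use, and the “model” is deliberately trivial; only the shape of the inputs and outputs matters here.

```python
# A minimal, tool-agnostic sketch of the three FTI interfaces.
# FeatureStore and ModelRegistry are hypothetical in-memory stand-ins for
# a real feature store and model registry.
from dataclasses import dataclass, field


@dataclass
class FeatureStore:
    features: dict = field(default_factory=dict)  # version -> (features, label) pairs


@dataclass
class ModelRegistry:
    models: dict = field(default_factory=dict)  # version -> trained model


def feature_pipeline(raw_data: list, store: FeatureStore) -> None:
    """Raw data in -> features and labels out, saved to the feature store."""
    store.features["v1"] = [(row["text"].lower(), row["label"]) for row in raw_data]


def training_pipeline(store: FeatureStore, registry: ModelRegistry) -> None:
    """Features and labels in -> trained model out, saved to the registry."""
    labels = [label for _, label in store.features["v1"]]
    # Stand-in "training": the model simply memorises the majority label.
    registry.models["v1"] = max(set(labels), key=labels.count)


def inference_pipeline(store: FeatureStore, registry: ModelRegistry) -> str:
    """Features and model in -> prediction out, in batch or real time."""
    model = registry.models["v1"]
    return model  # a real pipeline would apply the model to fresh features


store, registry = FeatureStore(), ModelRegistry()
feature_pipeline([{"text": "Hello", "label": "greeting"}], store)
training_pipeline(store, registry)
print(inference_pipeline(store, registry))  # -> "greeting"
```

Because each pipeline touches only the store or the registry, any of the three can be rewritten in a different technology, owned by a different team, or scaled independently without breaking the other two, which is exactly the point of the pattern.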
The final thing you must understand about the FTI pattern is that the system doesn’t have to contain only three pipelines. In most cases, it will include more. For example, the feature pipeline can be composed of a service that computes the features and one that validates the data. Also, the training pipeline can comprise separate training and evaluation components.

Applying the FTI architecture to a use case

The FTI architecture is tool-agnostic, but to better understand how it works, let’s present a concrete use case and tech stack.

Use case: Fine-tune an LLM on your social media data (LinkedIn, Medium, GitHub) and expose it as a real-time RAG application. Let’s call it your LLM Twin.

As we build an end-to-end system, we split it into four pipelines:

- The data collection pipeline (owned by the DE team)
- The FTI pipelines (owned by the AI teams)

As the FTI architecture defines a straightforward interface, we can easily connect the data collection pipeline to the ML components through a data warehouse, which, in our case, is a MongoDB NoSQL DB. The feature pipeline (the second ML-oriented data pipeline) can easily extract standardized data from the data warehouse and preprocess it for fine-tuning and RAG. The communication between the two is done solely through the data warehouse. Thus, the feature pipeline isn’t aware of the data collection pipeline or how it collected the raw data.

Figure 3: LLM Twin high-level architecture

The feature pipeline does two things:

- chunks, embeds, and loads the data into a Qdrant vector DB for RAG;
- generates an instruct dataset and loads it into a versioned ZenML artifact.

The training pipeline ingests a specific version of the instruct dataset, fine-tunes an open-source LLM from HuggingFace, such as Llama 3.1, and pushes it to a HuggingFace model registry. During the research phase, we use a Comet ML experiment tracker to compare multiple fine-tuning experiments and push only the best one to the model registry. In production, we can automate the training job and use our LLM evaluation strategy or canary tests to check whether the new LLM is fit for production. As the input dataset and output model registry are decoupled, we can quickly launch our training jobs using ML platforms like AWS SageMaker.

ZenML orchestrates the data collection, feature, and training pipelines. Thus, we can easily schedule them or run them on demand.

The end-to-end RAG application is implemented on the inference pipeline side, which accesses fresh documents from the Qdrant vector DB and the latest model from the HuggingFace model registry. Here, we can implement advanced RAG techniques such as query expansion, self-query, and reranking to improve the accuracy of the retrieval step and provide better context during the generation step.
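As a rough illustration of the retrieval step just described, the sketch below embeds a user query and pulls the closest chunks from a Qdrant collection. The collection name, the embedding model, and the assumption that chunks were stored with a "text" payload field are illustrative choices, not details taken from the LLM Twin stack.

```python
# A rough sketch of the RAG retrieval step. Assumes documents were already
# chunked, embedded, and stored in a Qdrant collection named "llm_twin",
# with each chunk's text kept in a "text" payload field (illustrative names).
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
client = QdrantClient(url="http://localhost:6333")


def retrieve_context(query: str, top_k: int = 5) -> list:
    """Embed the query and return the text of the top_k most similar chunks."""
    query_vector = embedder.encode(query).tolist()
    hits = client.search(
        collection_name="llm_twin",
        query_vector=query_vector,
        limit=top_k,
    )
    return [hit.payload["text"] for hit in hits]


print("\n---\n".join(retrieve_context("What have I written about MLOps?")))
```

In the full application, the retrieved chunks would be injected into the prompt sent to the fine-tuned LLM, and this retrieval function is also where query expansion and reranking would slot in.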
The fine-tuned LLM will be deployed to AWS SageMaker as an inference endpoint. Meanwhile, the rest of the RAG application is hosted as a FastAPI server, exposing the end-to-end logic as REST API endpoints. The last step is to collect the input prompts and generated answers with a prompt monitoring tool such as Opik, to evaluate the production LLM for issues such as hallucinations and moderation, or domain-specific problems such as writing tone and style.

Summary

The FTI architecture is a powerful mind map that helps you connect the dots in the complex data and AI world, as illustrated in the LLM Twin use case.

Unlock the full potential of Large Language Models with the "LLM Engineer's Handbook" by Paul Iusztin and Maxime Labonne. Dive deeper into real-world applications, like the FTI architecture, and learn how to seamlessly connect data engineering, ML pipelines, and AI production. With practical insights and step-by-step guidance, this handbook is an essential resource for anyone looking to master end-to-end AI systems. Don’t just read about AI; start building it. Get your copy today and transform how you approach LLM engineering!

Author Bio

Paul Iusztin is a senior ML and MLOps engineer at Metaphysic, a leading GenAI platform, serving as one of their core engineers in taking their deep learning products to production. Alongside Metaphysic, with over seven years of experience, he has built GenAI, Computer Vision, and MLOps solutions for CoreAI, Everseen, and Continental. Paul's determined passion and mission is to build data-intensive AI/ML products that serve the world and to educate others about the process. As the founder of Decoding ML, a channel for battle-tested content on learning how to design, code, and deploy production-grade ML, Paul has significantly enriched the engineering and MLOps community. His weekly content on ML engineering and his open-source courses focusing on end-to-end ML life cycles, such as Hands-on LLMs and LLM Twin, testify to his valuable contributions.


Bo Weaver on Cloud security, skills gap, and software development in 2019

Guest Contributor
19 Jan 2019
6 min read
Bo Weaver, a Kali Linux expert, shares his thoughts on the security landscape in the cloud. He also talks about the skills gap in the current industry and why hiring is such a tedious process, and he explains the pitfalls in software development and where tech is heading.

Bo, along with fellow Kali Linux expert Wolf Halton, was also interviewed on why Kali Linux is the premier platform for testing and maintaining Windows security. They talked about the advantages and disadvantages of using Kali Linux for pentesting, and about their stance on the role of pentesting in cybersecurity in general, in their interview titled “Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity”.

First is “The Cloud”

I laugh and cry at this term. I have a sticker on my laptop that says “There is no Cloud… only other people’s computers.” Your data is sitting on someone else’s system along with other people’s data, and these other people also have access to that system. Sure, security controls are in place, but the security of physical access has been bypassed. You’re “in the box”; one layer of security is now gone. Also, your vendor has full access to your data in some cases. How can you be sure what is going on with your data when it is in an unknown box in an unknown data center? The first rule of security is “trust no one”. Do you really trust Microsoft, Amazon, or Google? I sure don’t! Having your data physically out of your company’s control is not a good idea. Yes, it is cheaper, but what are your company and its digital property worth?

The ‘real’ skills and hiring gap in tech

As for the knowledge and skills gap, I see the hiring process in this industry as the biggest gap. The knowledge is out there; we now have schools that teach this field. When I started, there were no school courses: you learned on your own. Since there is training, there are a lot of skilled people out there. But go looking for a job, and it is a nightmare. IT doesn’t do the actual hiring these days; either HR or a headhunting agency does. The people you talk to have no clue what you really do. They have a list of acronyms, which they know nothing about, to match to your resume, and if you don’t have the complete list, you’re booted. If you don’t have a certain certification, you’re booted, even if you’ve worked with the technology for 10 years. Once, even with my skill level, it took sending out over a thousand resumes and more than a year for me to find a job in the security field.

Also, when working in security, you can’t really tell HR or a headhunter exactly what you did in your last job, due to NDA agreements. Writing these books is the first time I have been able to talk about what I do in detail, since the network being breached is a lab network I own myself. XYZ Bank would be really mad if I published their vulnerabilities, and rightly so.

In the US, most major networks are not managed by actual engineers but by people with an MBA degree. The manager has no clue what they are actually managing.
These managers are more worried about their department’s P&L statement than the security of the network. In Europe, you find that IT managers are older engineers who worked for years in IT and then moved into management. They fully understand the needs of a network and its security.

The trouble with software development

In software development, I see a dumbing down of user interfaces. This may be good for my 6-year-old grandson, but someone like me may want more access to the system. I see developers change things just for the sake of change. Take Microsoft’s Ribbon in Office: even after all these years, I find the ribbon confusing and hard to use. At least with LibreOffice, they give you a choice between a ribbon and an old-school menu bar. Or take the changes from GNOME 2 to GNOME 3: this dumbing down, attempting to make a desktop usable with both a tablet and a mouse, totally destroyed the usability of the desktop. What used to take one click now takes four.

A lot of developers these days have an “I am God and you are an idiot” complex. Developers should remember that without an engineer, there would be no system for their code to run on. Listen to and work with engineers.

Where do I see tech going?

Well, it is in everything these days, and I don’t see this changing. I never thought I would see a Linux box running a refrigerator or controlling my car, yet we have them today. Today, we can buy a system the size of a pack of cigarettes for less than $30.00 (a Raspberry Pi) that can do more than a full-size server could do 10 years ago. This is amazing. However, it is a two-edged sword when it comes to small, “cheap” devices. When you build a cheap device, you have to keep costs down, and for most companies building these devices, security is either non-existent or an afterthought. Yes, your whole network can be owned by first accessing that $30.00 IP camera with no security and moving on from there, eventually reaching your Domain Controller. I know this works; I’ve done it several times.

If you wish to learn more about tools that can improve your average in password acquisition, from hash cracking, online attacks, offline attacks, and rainbow tables to social engineering, the book Kali Linux 2018: Windows Penetration Testing - Second Edition is the go-to option for you.

Author Bio

Bo Weaver is an old-school, ponytailed geek. His first involvement with networks was in 1972 while in the US Navy, working on an R&D project called ARPANET. Bo has been working with and using Linux daily since the 1990s and is a promoter of Open Source. (Yes, Bo runs on Linux.) He now works as the senior penetration tester and security researcher for CompliancePoint, an Atlanta-based security consulting company.

Read next: Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)
Read next: Cloud computing trends in 2019
Read next: Jeff Weiner talks about technology implications on society, unconscious bias, and skill gaps: Wired 25


Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]

Richard Gall
29 Jul 2019
11 min read
“The service control platform is the next-gen of traditional API management,” Kong CTO and co-founder Marco Palladino tells me. “It’s not about APIs any more, it’s about services.”

This shift in the industry is what makes Kong so interesting. It’s one of the reasons I wanted to speak to Palladino. Its success is an index of businesses’ technological priorities today, and useful as an indicator of the way the world is going - one, it’s safe to say, that’s increasingly cloud-native and highly distributed. As part of a broad and growing cloud-native ecosystem, Kong is playing an important role in the digital transformation of thousands of companies around the world. Furthermore, the fact that it follows an open core model, with an open source version of Kong made available in 2015, underlines the way in which the platform occupies a valuable position in the valley between developer enablement and managerial control.

This isn’t always an easy place to be. ‘Digital transformation’ is a well-worn phrase, but behind it is the messy truth about the reality of how companies use technology: at their own pace, and often shaped by necessity rather than best practice. So, with Kong a useful proxy for the state of the software industry, I wanted to dive deeper into Kong’s challenges, as well as the opportunities the platform can potentially unlock for its users.

What is Kong?

Before going any further it’s probably worth explaining what Kong actually is. Essentially, Kong is an API management platform - it allows teams to manage how services interact and move within their architecture.

“APIs allow information to be in flight within our systems,” Palladino explains. Information can, he continues, either be “at rest in a database” or “in use by a monolith or microservice.” Naturally then, it follows that “the more we decouple - the more we distribute our applications - the more information will be… in flight.”

This is why Palladino believes Kong is so valuable today. The “flight” of information (he never uses the word “data”) necessarily implies a network and, as anyone familiar with L. Peter Deutsch’s fallacies of distributed computing will know, “the network is unreliable.” “So how do we protect that communication? How do we secure it? How do we connect it? How do we route that transmission?” Palladino says. “The more we decouple, the more we distribute, the more those problems become critical, if not essential, for a successful microservice organization… what Kong provides is a platform that allows us to intelligently broker the flow of information across the organization.”

Why does the world need Kong?

Do we really need another API management solution? The short answer to this is relatively straightforward: the world is moving toward (micro)services, and Kong provides you with a way of managing them. This control is crucial, moreover, because “in microservices, being slow is the new down - if you’re slow, you’re down.”

But that’s only half of the picture. This “new world” is still in development and transition, with each organization following its own technological path. Kong is necessary because it supports and facilitates these unique transitions, all of them happening in different ways around the world. “Kong is a platform-agnostic system that can run across different architectures, but most importantly it can run across different platforms,” Palladino says.
“While we do work very well with Kubernetes, we also support… traditional legacy virtual machines or bare metal infrastructures. And the reason we do support both the modern and the old is that we’re working with enterprise organizations… [who] might be deploying new greenfield applications in Kubernetes or OpenShift but… still have a significant part of their software running in traditional virtual machines.”

One of Kong’s strengths, Palladino suggests, is its pragmatism and the way in which the company is alive to its customers’ respective levels of technological maturity. “I’m very proud to say we’re a very pragmatic company. While we do work with developers to make sure that Kong is a leader in what we do in microservices and traditional API management, we’re also very pragmatic - we understand that’s the end goal, it’s not necessarily the current state of affairs in our enterprise organizations.”

Read next: It’s Black Friday: But what’s the business (and developer) cost of downtime?

Kong sees itself as a ‘strategic technology partner’

However, while every organization has its own timeline when it comes to technology, its CTO describes Kong as a platform that is paving the way for the future rather than simply catering to the needs of its customers. “We’re not an industry follower, we’re an industry leader,” says Palladino. “We’re looking at these large-scale systems that organizations are creating and we’re thinking how can we make that better from a security standpoint, from a discoverability standpoint, from a documentation standpoint?”

This isn’t just Silicon Valley posturing. As the software world moves toward cloud and microservices, the landscape shifts at a much faster rate. That makes it essential for organizations like Kong to pave the way for the future rather than react to the needs and demands of their customers. In turn, this means the scope of Kong’s work is growing. “We’re not just a vendor. We don’t give you the platform and then let you figure it out. We want to be a strategic technology partner with our customers,” says Palladino. “We engage with them, not just from a low-level standpoint with the teams, but we also engage… from a higher-level executive standpoint, because we want to enable not just the technology but the business itself to be successful.”

This is something Palladino is well aware of. Kong’s customers aren’t, after all, needlessly indulging in “an exercise in adopting new technologies,” but are rather doing so in response to business requirements. Having a more extensive relationship - or partnership, as Palladino puts it - ensures that digital transformation is both impactful and relatively risk-free.

Open source and the rise of bottom-up software adoption

However, although Kong positions itself as a company attuned to the business needs of its customers, it’s also clear that it understands the developer’s power in today’s technology ecosystem. Palladino sees open source as playing a big part in this. And as an open core platform, Kong is able to build a community of creative and innovative developers around the wider product ecosystem.
But Palladino is also keen to point out that you can’t really separate open source from the API and microservices revolutions. “10 years ago APIs used to be a nice-to-have,” Palladino says. The approach was, he explains, little more than a kind of curiosity or a desire for a small community around a platform: “let’s open up some APIs, let’s open up this monolithic black box and see what happens.” However, “it’s not like that any more.”

If “APIs are the core business of every organization,” as Palladino puts it to me, “then you simply can’t afford to have a black box at the center of your infrastructure. You need to know what’s happening and how services are interacting with one another - the way of achieving this is through open source software.”

“When we look at the microservices transition, we look at Docker, we look at Kubernetes, we look at Elastic, we look at Zipkin… Kafka… Kong, what’s the baseline? Open source. Each one of these products is open source at their core. Open source is driving this new transformation within the enterprise,” says Palladino.

Palladino continues on this theme, offering a compelling narrative of why open source has become the dominant form of software. He begins with the problems posed by traditional central IT, “an ivory tower far from the business, far from real usage,” which consequently “were not able to iterate fast enough to be able to answer those business requirements.”

“The teams building the apps were closer to the business, closer to the customer, and they had to pick the right solution in order to be successful. And so what these… teams did was to go into self-service ecosystems - like… CNCF [Cloud Native Computing Foundation] - and pick and choose open source technologies they could adopt without having to go through an enterprise process… that’s why open source became more important - because it allowed them to be in production and get business value without having to deal with the bureaucracy of central IT - so it’s a bottom-up adoption from the teams all the way up, as opposed to from central IT down to all the teams.”

Developer freedom and organizational control

Palladino refers to ‘bottom-up’ adoption a number of times throughout our conversation. He argues that it’s an industry shift that has been initiated by microservices. “With the emergence of microservices something happened in the industry - software is not being sold top-down anymore as much as it used to be - it’s more bottom-up adoption.”

He also explains that having an open source element to the Kong offering is actually helping the company to grow; it’s a useful onboarding route. “Sometimes - often, actually - Kong is being adopted just because the organization happens to already be running Kong in production, and you need enterprise features and enterprise support,” says Palladino.

But while developer power seems to be part of this new post-central-IT world, it also makes Kong even more valuable for those in leadership positions. Taking the example of multi-cloud, Palladino explains: “it’s very rare to see a CIO saying we would like to be multi-cloud. Sometimes it happens, [but] most likely the organization is already multi-cloud because it naturally evolved to be multi-cloud.
Different teams, different products, using different clouds, different services.” With the wealth of tools, platforms, and environments being used by forward-thinking developers trying to solve the problems in their immediate vicinity, it makes sense that the “C-level executives” who express an interest in Kong are looking for “a way to consolidate and standardize how their APIs and microservices are being managed and secured across multiple clouds, across multiple platforms.”

“A big concern for the leadership of the top Global 5000 organizations we’re working with… [is] making sure they can consolidate how security is being done, how monitoring is being done, how observability and enablement is being done across multiple clouds,” Palladino says.

Read next: Honeycomb CEO Charity Majors discusses observability and dealing with “the coming armageddon of complexity” [Interview]

The future of Kong and API management

The future for Kong looks bright. The two new features released by the platform earlier this year - Kong Brain and Kong Immunity - signal what the broader trends might be in the software infrastructure and systems engineering space. Both are backed by artificial intelligence, and provide cutting-edge ways to manage the reliability and security of the services inside your infrastructure.

Kong Brain, Palladino explains, lets you “listen to… runtime traffic to auto-generate documentation for APIs… services, and monoliths” that organizations have had no visibility into “after 20 years of running them in production.” Essentially then, it’s a tool that will prove incredibly useful in working with legacy software; it will certainly encourage the ‘lift and shift’ mentality that we’re starting to see emerge.

Kong Immunity, meanwhile, is a security tool that uses machine learning to detect anomalies in traffic, allowing users to identify security threats and breaches within their system. “Traditional web application firewalls… don’t work within east-west traffic [server to server],” Palladino says. “They work perhaps in north-south traffic [client to server], but they’re slow, they’re very heavyweight.” Kong, then, “tries to take away that security concern by providing a machine learning platform that can asynchronously, with no performance loss, learn from existing traffic across every version of every microservice.”

With releases like these, it’s hard to dispute Palladino’s assertion that Kong is indeed an ‘industry leader.’ However, as Palladino also appears to be aware, to be truly successful it’s not enough to just lead the industry - you have to make sure you can bring people with you.

Learn more about Kong here, and follow Marco Palladino on Twitter.


Cybersecurity researcher "Elliot Alderson" talks Trump and Facebook, Google and Huawei, and teaching kids online privacy [Podcast]

Richard Gall
08 Aug 2019
3 min read
For anyone who's watched Mr. Robot, the name Elliot Alderson will sound familiar. However, we're not talking about Rami Malek's hacker alter ego - instead, the name has been adopted as an alias by a real-life white-hat hacker who has been digging into the dark corners of the wild and often insecure web. Elliot's real name is Baptiste Robert (whisper it...) - he was kind enough to let us peek beneath the pseudonym, and spoke to us about his work as a cybersecurity researcher and what he sees as the biggest challenges in software security today.

Listen: https://soundcloud.com/packt-podcasts/cybersecurity-researcher-elliot-alderson-on-fighting-the-good-fight-online

"Elliot Alderson" on cybersecurity, politics, and regulation

In the episode we discuss a huge range of topics, including:

- Security and global politics - is it evolving the type of politics we have? Is it eroding trust in established institutions?
- Google's decision to remove its apps from Huawei devices
- The role of states and the role of corporations - who is accountable? Who should we trust?
- Regulation
- Technological solutions

What Elliot Alderson has to say on the podcast episode...

On Donald Trump's use of Facebook in the 2016 presidential election: "We saw that social networks have an impact on elections. Donald Trump was able to win the election because of Facebook - because he was very aggressive on Facebook and able to target a lot of people..."

On foreign interference in national elections: "We saw, also, that these tools... have been used by countries... in order to manipulate the elections of another country. So as a technician, as a security researcher, as an infosec professional, you need to ask yourself what is happening - can we do something against that? Can we create some tool? Can we fight this phenomenon?"

On how technology professionals and governing institutions should work together: "We should be together. This is the responsibility of government and countries to find vulnerabilities and to ensure the security of products used by its citizens - but it's also the responsibility of infosec professionals, and we need to work closely with governments to be sure that nobody abuses vulnerabilities out there..."

On teaching the younger generation about privacy and protecting your data online: "I think government and countries should teach young people the value of personal data... personally, as a dad, this is something I'm trying to teach my kids - and say okay, this website is asking you your personal address, your personal number, but do they need it? ...In a lot of cases the answer is quite obvious: no, they don't need it."

On Google banning Huawei: "My issue with the Huawei story and the Huawei ban is that as a user, as a citizen, we are only seeing the consequences. Okay, Google banned Huawei - Huawei is not able to use Google services. But we don't have the technical information behind that."

On the importance of engineering ethics: "If your boss is coming to you and saying 'I would like to have an application which is tracking people during their day-to-day work', what is your decision? As developers, we need to say 'no: this is not okay. I will not do this kind of thing'".

Read next: Doteveryone report claims the absence of ethical frameworks and support mechanisms could lead to a 'brain drain' in the U.K. tech industry

Follow Elliot Alderson on Twitter: @fs0c131y

“As Artists we should be constantly evolving our technical skills and thought processes to push the boundaries on what's achievable,” Marco Matic Ryan, Augmented Reality Artist

Sugandha Lahoti
18 Sep 2018
11 min read
Augmented and Virtual Reality are taking on the world, one AR app at a time. Almost all tech giants have their own AR platforms, from Google's ARCore to Apple's ARKit to Snapchat's Lens Studio. Not just that, there are now various galleries and exhibits focused solely on AR, where designers and artists showcase their artistic talents combined with rich augmented reality experiences. While searching for designers and artists working in the augmented reality space, we stumbled upon Marc-O-Matic, aka Marco Ryan. We were fascinated by his artwork and wanted to interview him right away.

Watch: https://vimeo.com/219045989

As his website bio says, "Marco is a multidisciplinary Artist, Animator, Director, Storyteller and Technologist working across Augmented and Virtual Reality technologies." He shared with us his experiences working with Augmented Reality and told us about his creation process, his current projects, tips and tricks for a budding artist venturing into the AR space, and his views on merging technology and art.

Key Takeaways

- A well-rounded creative is someone who can do everything from creating the art and the associated stories to executing the technical aspects of the work to make it come to life. The future belongs to these creative types who love to learn and experiment every day.
- An artist must consider three aspects before taking on a project: first, how to tell their story in a more engaging and interesting manner; second, how to combine different skill sets together; and third, the message their work conveys.
- Augmented Reality has added a new level of depth to the art experience, not just for artists but also for viewers. Everyone has a smartphone these days, and with AR you can add many elements to an art piece: sound, motion, and 3D elements that engage more of your senses.
- It is easy for a beginner artist to get started with creating Augmented Reality art. You need to start by learning a basic language (mostly C# or JavaScript), and then you can learn more as you explore. Tools such as Adobe After Effects, Maya, and Blender are good for building shaders, materials, or effects. Unity 3D is a popular choice for implementing the AR functionality, and over 91% of HoloLens experiences are made with Unity. It supports a large number of devices and also has a large community base. Unity offers a highly optimized rendering pipeline and rapid iteration capabilities to create realistic AR/VR experiences.
- AI and Machine Learning are at the forefront of innovation in AR/VR. Machine learning algorithms can be used in character modeling: they can map your facial movements directly to a 3D character's face. AI can make a character react emotionally based on the tone of the audio, and can automate tasks, allowing an artist to focus solely on the creative aspects.

Full Interview

How did you get started in AR? How did your journey from being an artist to an AR artist begin?

I started experimenting with immersive AR/VR tech about 2-3 years ago. At the time, buzzwords like virtual and augmented reality started trending, and it had me curious to see how I could implement my own existing skills in this area. Prior to getting into AR and VR, I came from a largely self-taught background in art and illustration, which then evolved into exploring film and animation, and finally led to game design and programming.
Exploring all these areas allowed me to become a well-rounded creative, meaning I could do everything from creating the art and the associated stories to executing the technical aspects of the work to make it come to life.

That's quite impressive and fascinating to know! What programming language did you learn first, and why? What was your approach to learning the language? Do you also dabble in other languages? How can artists overcome the fear of learning technology?

Working with the game engine Unity 3D to create my VR and AR experiences, you can program in either C# or JavaScript. C# was the more prevalent of the two languages, and for me it was easier to pick up. As a visual learner, I initially came to understand the language through node-based programming tools like Playmaker and uScript. These plug-ins for Unity are great for beginners, as they allow you to visually program behaviors and functionality into your experience by creating interconnected node trees or maps, which then generate the code. I'm familiar with this form of logic building, as other programs I use, such as Adobe After Effects, Maya, and Blender, use similar systems for building shaders, materials, or effects.

Through node-based programming, or visual scripting, you can take a look under the hood and understand the syntax iteratively. It's like understanding a language through reverse engineering: by seeing the code generated from the node maps you create, you can quickly understand how the language is structured.

I try to think of learning new things as adding skills to your arsenal; the more skills you possess, the greater you can expand on your ideas. I don't think it's so much a fear of learning new technology. I think it's more a question of 'Will learning this new technology be worth my time and benefit my current practice?' The beautiful age we live in makes it so much easier to learn new skills online, so there's already so much support available for anyone wanting to explore game design technologies in their creative practice.

What tools do you use in the process, and why those tools? What is the actual process you follow? What are the various steps?

The Augmented Reality art I create has two parts:

Creating the Art

Everything is drawn on paper first, using your typical office ballpoint pens, ink wash, and sometimes watercolours or gouache. I prefer creating my work that way. To me, keeping that 'hand-crafted' style and aesthetic has its charm when bringing it to life through Augmented Reality. I think it's really important to demonstrate the importance of traditional mediums, techniques, and disciplines when working with immersive technologies, as they're still very valid in creating engaging content. I see a lot of AR experiences that lack charm and feel somewhat 'gamey', which is why I want to continue integrating more raw-looking aesthetics in this area.

Augmented Representation & Implementation

Once the art is planned and created, I scan it and start splicing it up digitally. From the preserved textures and linework, I then create a 3D representation of the artwork in Blender, a 3D modeling and animation package. It's one of many 3D authoring tools out there; it not only packs a significant number of features, it's also free, which makes it ideal for beginners and students. Once the 3D representation of the work is created, it's imported into a game engine called Unity 3D, where I implement the AR functionality.
Unity 3D is a widely used game engine. It's one of many engines out there, but what makes it great is its support for deploying to all manner of devices. It also has a large community behind it, should you need help.

How long does a typical project take? Do you work by yourself or with a team? How long do you work?

On average, an augmented artwork may take anywhere from three to four weeks to create, which includes the art, animation, modeling, and AR implementation. When I first started out it would take much longer, but over time I've streamlined my own processes to get things done faster. My alias Marc-O-Matic tends to be mistaken for a team, but I'm actually just one person. I much prefer being able to create and direct everything myself.

What is your advice to a budding artist who is interested in learning tech to produce art?

From my experience, don't confine yourself to one specific medium, and try practising a different skill or technique every day. As artists, we should be constantly evolving not just our technical skills but also our thought processes, to push the boundaries on what's achievable. 'How can I tell my story in a more engaging and interesting manner?' 'What can I create if I combine these different skill sets together?' 'How do I bring an end to systematic racism and white privilege?' etc. The more knowledge you have, the greater you can push the boundaries of your creative expression.

How do you support yourself financially? Do you also sell your art pieces or make art for clients? Where do you sell your work and meet people?

I work under my own artist entity, 'Marc-O-Matic.' I much prefer working independently to working within larger studios or agencies. I juggle a balance between commercial work and personal projects. There's generally a high demand for Augmented/Virtual Reality experiences, and I'm really lucky and grateful that clients want content in my particular style. I work with a variety of organisations, generally consulting, providing creative direction, and in most cases building the experiences too. Aside from the commercial work, I'm currently touring my own Augmented Reality art and storytelling collection called 'Moving Marvels', where audiences get to see my illustrated works come to life right in front of them. It's also how I sell various limited-edition prints of my augmented artworks. My collection is exhibited at tech and innovation conferences, symposiums, galleries, and even at universities and educational institutes. It's a great way to make connections and demonstrate what immersive technologies can do in a creative capacity.

No interview would be complete without a discussion of AI and machine learning. As a technophile artist, are you excited about merging AI and art in new ways? What are some of your hopes and fears about this new tech? Do you plan to try machine learning anytime soon?

It's like a double-edged sword with +5 to Creativity. Technology can enhance our creative abilities and processes in a number of ways. At the same time, it can also make some of us lazy, because we can become so reliant on it. Like any new technology that aims to assist in the creative process, there's always the fear that technology will make creatives lazy or will replace professions altogether. In some ways technology has done this, but at the same time it has also created new job opportunities and abilities for artists.
In areas like character animation, for example, the assistance of machine learning algorithms means creatives can worry less about the laborious physical processes of rigging and complex animation and focus more on the storytelling. For example, we can significantly reduce production times in facial rigging through facial recognition. By learning the behaviour and structure of your own face, machine learning algorithms can map your facial movements directly onto a 3D character's face. By simply recording your own facial movements and gestures, you've created an entire expression map for your 3D character to use. What's also crazy is that, on top of that, by running a voice-over track on that 3D character, you can also train it to sync with the voice as well as have the entire face react emotionally based on the tone of the audio. It's game-changing but also terrifying stuff, as this sort of technology can be used to create highly realistic fake impressions of real people. As an animator, I've started experimenting with this technology for my own VR animations. What would take me hours or even days to animate for a talking 3D character can now take mere minutes.

Author Bio

Marc-O-Matic is the moniker of Marco Matic Ryan. He is a multidisciplinary Artist, Animator, Director, Storyteller and Technologist working across Augmented and Virtual Reality technologies. He is based in Victoria, Australia.

Watch: https://player.vimeo.com/video/125218939

Read next: Unity plugins for augmented reality application development
Read next: Google ARCore is pushing immersive computing forward
Read next: There's another player in the advertising game: augmented reality


Why choose IBM SPSS Statistics over R for your data analysis project

Amey Varangaonkar
22 Dec 2017
9 min read
Data analysis plays a vital role in organizations today. It enables effective decision-making by addressing fundamental business questions based on an understanding of the available data. While there are tons of open source and enterprise tools for conducting data analysis, IBM SPSS Statistics has emerged as a popular tool among statistical analysts and researchers. It offers them the perfect platform to quickly perform data exploration and analysis, and to share their findings with ease.

Dr. Kenneth Stehlik-Barry: Kenneth joined SPSS as Manager of Training in 1980, after using SPSS for his own research for several years. He has used SPSS extensively to analyze and discover valuable patterns that can be used to address pertinent business issues. He received his PhD in Political Science from Northwestern University and currently teaches in the Masters of Science in Predictive Analytics program there.

Anthony J. Babinec: Anthony joined SPSS as a Statistician in 1978, after assisting Norman Nie, the founder of SPSS, at the University of Chicago. Anthony has led business development efforts to find products implementing technologies such as CHAID decision trees and neural networks. Anthony received his BA and MA in Sociology, with a specialization in Advanced Statistics, from the University of Chicago, and is on the Board of Directors of the Chicago Chapter of the American Statistical Association, where he has served in different positions, including President.

In this interview, we take a look at the world of statistical data analysis and see how IBM SPSS Statistics makes it easier to derive business sense from data. Kenneth and Anthony also walk us through their recently published book, Data Analysis with IBM SPSS Statistics, and tell us how it benefits aspiring data analysts and statistical researchers.

Key Takeaways - IBM SPSS Statistics

- IBM SPSS Statistics is a key offering of IBM Analytics, providing an integrated interface for statistical analysis on premises and in the cloud.
- SPSS Statistics is a self-sufficient tool: it does not require you to have any knowledge of SQL or any other scripting language.
- SPSS Statistics helps you avoid the three most common pitfalls in data analysis: handling missing data, choosing the best statistical method for an analysis, and understanding the results of the analysis.
- R and Python are not direct competitors to SPSS Statistics; instead, you can create customized solutions by integrating SPSS Statistics with these tools for effective analyses and visualization.
- Data Analysis with IBM SPSS Statistics introduces readers to a variety of popular statistical techniques and shows how to use them to uncover useful hidden insights in their data.

Full Interview

IBM SPSS Statistics is a popular tool for efficient statistical analysis. What do you think are the three notable features of SPSS Statistics that make it stand apart from the other tools available out there?

SPSS Statistics has a very short learning curve, which makes it ideal for analysts to use efficiently. It also has a very comprehensive set of statistical capabilities, so virtually everything a researcher would ever need is encompassed in a single application. Finally, SPSS Statistics provides a wealth of features for preparing and managing data, so it is not necessary to master SQL or another database language to address data-related tasks.

With over 20 years of experience in this field, you have a solid understanding of the subject and, equally, of SPSS Statistics.
How do you use the tool in your work? How does it simplify your day-to-day tasks related to data analysis?

I have used SPSS Statistics in my work with SPSS and IBM clients over the years. In addition, I use SPSS for my own research analysis. It allows me to make good use of my time whether I'm serving clients or doing my own analysis because of the breadth of capabilities available within this one program. The fact that SPSS produces presentation-ready output further simplifies things for me, since I can collect key results as I work, put them into a draft report, and share them as required.

What are the prerequisites to use SPSS Statistics effectively? For someone who intends to use SPSS Statistics for their data analysis tasks, how steep is the curve when it comes to mastering the tool?

It certainly helps to have an understanding of basic statistics when you begin to use SPSS Statistics, but it can be a valuable tool even with a limited background in statistics. The learning curve is a very "gentle slope" when it comes to acquiring sufficient familiarity with SPSS Statistics to use it very effectively. Mastering the software does involve more time and effort, but one can accomplish this over time by building on the initial knowledge that comes fairly easily. The good news is that one can obtain a lot of value from the software well before truly mastering it, simply by discovering its many features.

What are some of the common problems in data analysis? How does this book help the readers overcome them?

Some of the most common pitfalls encountered when analyzing data involve handling missing/incomplete data, deciding which statistical method(s) to employ, and understanding the results. In the book, we go into the details of detecting and addressing data issues, including missing data. We also describe what each statistical technique provides and when it is most appropriate to use each of them. There are numerous examples of SPSS Statistics output and of how the results can be used to assess whether a meaningful pattern exists.

In the context of all the above, how does your book Data Analysis with IBM SPSS Statistics help readers in their statistical analysis journey? What, according to you, are the 3 key takeaways for the readers from this book?

The approach we took with our book was to share with readers the most straightforward ways to use SPSS Statistics to quickly obtain the results needed to conduct data analysis effectively. We did this by showing the best way to proceed when it comes to analyzing data and then showing how this process can best be done in the software. The key takeaways from our book are how to approach the discovery process when analyzing data, how to find hidden patterns present in the data, and what to look for in the results provided by the statistical techniques covered in the book.

IBM SPSS Statistics 25 was released recently. What are the major improvements or features introduced in this version? How do these features help the analysts and researchers?

There are a lot of interesting new features introduced in SPSS Statistics 25. For starters, you can copy charts as Microsoft Graphic Objects, which allows you to manipulate charts in Microsoft Office. There are changes to the chart editor that make it easier to customize colors, borders, and grid line settings in charts. Most importantly, it allows the implementation of Bayesian statistical methods.
Bayesian statistical methods enable the researcher to incorporate prior knowledge and assumptions about model parameters. This facility looks like a good teaching tool for statistical educators.

Data visualization goes a long way in helping decision-makers get an accurate sense of their data. How does SPSS Statistics help them in this regard?

Kenneth: Data visualization is very helpful when it comes to communicating findings to a broader audience, and we spend time in the book describing when and how to create useful graphics for this purpose. Graphical examination of the data can also provide clues regarding data issues and hidden patterns that warrant deeper exploration. These topics are also covered in the book.

Anthony: SPSS Statistics' data visualization capabilities are excellent. The menu system makes it easy to generate common chart types, and you can develop customized looks and save them as a template to be applied to future charts. Underlying SPSS graphics is an influential approach called the Grammar of Graphics, and the SPSS graphics capabilities are embodied in a versatile syntax called the Graphics Programming Language.

Do you foresee SPSS Statistics facing stiff competition from open source alternatives in the near future? What is the current sentiment in the SPSS community regarding these topics?

Kenneth: Open source alternatives such as Python and R might look like potential competition for SPSS Statistics, but I would argue otherwise. These tools, while powerful, have a much steeper learning curve and will prove difficult for subject matter experts who only periodically need to analyze data. SPSS is ideally suited for these periodic analysts, whose main expertise lies in their field - which could be healthcare, law enforcement, education, human resources, marketing, etc.

Anthony: The open source programs have a lot of capability, but they are also fairly low-level languages, so you must learn to code. The learning curve is steep, and there are many maintainability issues. R has two major releases a year, so you can have a situation where the data and commands remain the same but the result changes when you update R. There are many dependencies among R packages. R has many contributors and is an avenue for getting your hands on new methods; however, there is wide variance in the quality of the contributors and contributed packages. The occasional user of SPSS has an easier time jumping back in than does the occasional user of open source software. Most importantly, it is easier to employ SPSS in production settings.

SPSS Statistics supports custom analytical solutions through integration with R and Python. Is this an intent from IBM to join hands with the open source community?

This is a good follow-up to the previous question. The integration with R and Python allows SPSS Statistics to be extended to accommodate a situation in which an analyst wishes to try an algorithm or graphical technique not directly available in the software but supported in one of these languages. It also allows those familiar with R or Python to use SPSS Statistics as their platform and take advantage of all the built-in features it comes with out of the box, while still having the option to employ these other languages where they provide additional value.

Lastly, this book is designed for analysts and researchers who want to get meaningful insights from their data as quickly as possible. How does this book help them in this regard?
SPSS Statistics does make it possible to very quickly pull in data and get insightful results. This book is designed to streamline the steps involved in getting this done, while also pointing out some of the less obvious "hidden gems" that we have discovered over decades of using SPSS in virtually every possible situation.
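As a concrete illustration of the R and Python integration discussed above, here is a minimal sketch of driving SPSS Statistics syntax from Python. It assumes the spss module that ships with SPSS Statistics' Python programmability feature (it is not a PyPI package), and the data file and variable names are hypothetical placeholders.

```python
# Minimal sketch: submitting SPSS syntax from Python. The 'spss'
# module is installed with IBM SPSS Statistics itself; the data file
# and variable names below are hypothetical.
import spss

spss.Submit("""
GET FILE='C:/data/survey.sav'.
FREQUENCIES VARIABLES=age income education.
DESCRIPTIVES VARIABLES=income
  /STATISTICS=MEAN STDDEV.
""")
```

The bridge also works in the other direction: SPSS syntax jobs can embed Python between BEGIN PROGRAM and END PROGRAM blocks, which is how many analysts first mix the two.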

Mastering Midjourney AI World for Design Success

Margarida Barreto
21 Nov 2024
15 min read
Introduction

In today’s rapidly shifting world of design and trends, artificial intelligence (AI) has become a reality! It’s now a creative partner that helps designers and creative minds go further and stand out from the competition. One of the leading AI tools revolutionizing the design process is Midjourney. Whether you’re an experienced professional or a curious beginner, mastering this tool can enhance your creative workflow and open up new possibilities for branding, advertising, and personal projects. In this article, we’ll explore how AI can act as a brainstorming partner, help overcome creative blocks, and provide insights into best practices for unlocking its full potential.

Using AI as my creative colleague

AI tools like Midjourney have the potential to become more than just assistants; they can function as creative collaborators. Often, as designers, we hit roadblocks - times when ideas run dry, or creative fatigue sets in. This is where Midjourney steps in, acting as a colleague who is always available for brainstorming. By generating multiple variations of an idea, it can inspire new directions or unlock solutions that may not have been immediately apparent.

The beauty of AI lies in its ability to combine data insights with creative freedom. Midjourney, for instance, uses text prompts to generate visuals that help spark creativity. Whether you’re building moodboards, conceptualizing ad campaigns, or creating a specific portfolio of images, the tool’s vast generative capabilities enable you to break free from mental blocks and jumpstart new ideas.

Best practices and trends in AI for creative workflows

While AI offers incredible creative opportunities, mastering tools like Midjourney requires understanding its potential and limits. A key practice for success with AI is knowing how to use prompts effectively. Midjourney allows users to guide the AI with text descriptions or just image input, and the more you fine-tune those prompts, the closer the output aligns with your vision. Understanding the nuances of these prompts - from image weights to blending modes - enables you to achieve optimal results, as the short example below shows.
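An illustrative prompt of the kind discussed here (the reference URL is a hypothetical placeholder; --ar sets the aspect ratio, and --iw weights the image reference against the text):

```
/imagine prompt: https://cdn.example.com/reference.png blank vertical
lightbox on the facade of a modern building, front view --ar 3:2 --iw 1.5
```

Small changes to these values can shift the result dramatically, which is why iterating on prompts is treated as a core skill.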
A significant trend in AI design is the combination of multiple tools. Midjourney is powerful, but it’s not a one-stop solution. The best results often come from integrating other third-party tools like Kling.ai or Gen 3 Runway. These complementary tools help refine the output, bringing it to a professional level. For instance, Midjourney might generate the base image, and a tool like Kling.ai could then animate that image, creating dynamic visuals perfect for social media or advertising.

Additionally, staying up to date with AI updates and model improvements is crucial. Midjourney regularly releases new versions that bring refined features and enhancements. Learning how these updates impact your workflow is a valuable skill, as mastering earlier versions helps build a deeper understanding of the tool’s evolution and future potential. The book, The Midjourney Expedition, dives into these aspects, offering both beginners and advanced users a guide to mastering each version of the tool.

Overcoming creative blocks and boosting productivity

One of the most exciting aspects of using AI in design is its ability to alleviate creative fatigue. When you’ve been working on a project for hours or days, it’s easy to feel stuck. Here’s an example of how AI helped me when I needed to create a mockup for a client’s campaign. I wasn’t finding suitable mockups on regular stock photo sites, so I decided to create my own:

1. Go to the Midjourney website (www.midjourney.com) and log in with your Discord or Google account.
2. Go to Create (step 1 in the image below), enter the prompt (3D rendering of a blank vertical lightbox in front of a wall of a modern building. Outdoor advertising mockup template, front view) in the text box (step 2), then click the icon on the right (step 3) to open the settings box (step 4) and change any settings you want. In this case, let’s keep the default settings; I only adjusted the settings to make the image landscape-oriented, then pressed Enter.
3. Four images will appear. Choose the one you like the most, or rerun the job until you feel happy with the result.
4. I had my image, but I still needed to add the advertisement I had previously generated in Midjourney, so I could present some ideas for the final mockup to my client. Click on the image to enlarge it and get more options, then, at the bottom of the page, click on Editor.
5. In Editor mode, with the erase tool selected, erase the inside of the billboard frame. Next, copy the URL of the image you want to use as a reference inside the billboard, and edit your prompt to: https://cdn.midjourney.com/urloftheimage.png 3D rendering of a, Fashion cover of "VOGUE" magazine, a beautiful girl in a yellow coat and sunglasses against a blue background inside the frame, vertical digital billboard mockup in front of a modern building with a white wall at night. Glowing light inside the frame., in high resolution and high quality. Then press Submit.
6. This is the final result. If you have mastered an editing tool such as Photoshop, you can skip this last step and personalize the mockup there instead.

This is just one example of how AI saved me time and allowed me to create a custom mockup for my client. For many designers, Midjourney serves as another creative tool, always fresh with new perspectives and helping unlock ideas we hadn’t considered. Moreover, AI can save hours of work. It allows designers to skip repetitive tasks, such as creating multiple iterations of mockups or ad layouts. By automating these processes, creatives can focus on refining their work and ensuring that the main visual content serves a purpose beyond aesthetics.

The challenges of writing about a rapidly evolving tool

Writing The Midjourney Expedition was a unique challenge because I was documenting a technology that evolves daily. AI design tools like Midjourney are constantly being updated, with new versions offering improved features and refined models. As I wrote the book, I found myself not only learning about the tool but also integrating the latest advancements as they occurred.

One of the most interesting parts was revisiting the older versions of Midjourney. These models, once groundbreaking, now seem like relics, yet they offer valuable insights into how far the technology has come. Writing about these early versions gave me a sense of nostalgia, but it also highlighted the rapid progress in AI. The same principles that amazed us two years ago have been drastically improved, allowing us to create more accurate and visually stunning images.

The book is not just about creating beautiful images; it’s about practical applications. As a communication designer, I’ve always focused on using AI to solve real-world problems, whether for branding, advertising, or storytelling.
I find Midjourney to be a powerful solution for any creative who needs to go one step further in an effective way.

Conclusion

AI is not the future of design; it’s already here! While I don’t believe AI will replace creatives, any creator who masters these tools may replace those who don’t use them. Tools like Midjourney are transforming how we approach creative workflows and even final outcomes, enabling designers to collaborate with AI, overcome creative blocks, and produce better results faster. Whether you're new to AI or an experienced user, mastering these tools can unlock new opportunities for both personal and professional projects. By combining Midjourney with other creative tools, you can push your designs further, ensuring that AI serves as a valuable resource for your creative tasks.

Unlock the full potential of AI in your creative workflows with "The Midjourney Expedition". This book is for creative professionals looking to leverage Midjourney. You’ll learn how to produce stunning AI art, streamline your creative process, and incorporate AI into your work, all while gaining a competitive edge in your industry.

Author Bio

Margarida Barreto is a seasoned communication designer with over 20 years of experience in the industry. As the author of The Midjourney Expedition, she empowers creatives to explore the full potential of AI in their workflows. Margarida specializes in integrating AI tools like Midjourney into branding, advertising, and design, helping professionals overcome creative challenges and achieve outstanding results.

Why switch to Angular for web development - Interview with Minko Gechev

Amarabha Banerjee
09 Apr 2018
10 min read
Angular is one of the most popular JavaScript frameworks. Today, it's jostling for position with React and Vue as the leading front-end development framework. But if you're not using it, you're probably wondering why you should switch to Angular at all. What makes Angular such a good web development framework? And what does it give you that the likes of Vue and React don't?

To help you decide if you should switch to Angular, we spoke to Minko Gechev, author of the third edition of Switching to Angular. Minko has worked incredibly closely with the Angular project - he was the author of the Angular 2 style guide. That means he's well placed to explain the benefits of Angular and to help you decide if it's the right framework for you.

Key Takeaways:

- Angular, contrary to public opinion, has a consistent development lifecycle and is a trustworthy framework for developers to start their careers with.
- The primary goal of the Angular community is to make the framework lighter and much more efficient.
- The book Switching to Angular - Third Edition works as a perfect stepping stone for new developers looking to start their journey with Angular.
- AngularJS and Angular are two different frameworks. People often incorrectly refer to AngularJS as Angular version 1, which makes developers think that Angular is the next version of AngularJS. Although similar in nature, they should never be considered predecessor and successor.

How has Angular come to dominate front-end development?

What do you think is the primary reason for Angular to have the lion's share of the front-end development market?

Minko Gechev: There are several important reasons. Angular is a really well-designed framework which provides an entire solution for developers working on the user interface. The framework is developed and maintained by Google, which means that it is very well engineered and a high-quality solution.

Angular provides a great development experience. Since the beginning, Angular was designed to simplify the development process dramatically. The framework provides friendly APIs which make the process of building complex user interfaces trivial. Out of the box, Angular has very performant and well-tested implementations of two-way data-binding, dependency injection, and change detection. Notice that these not only reduce the boilerplate in the process of application development but also make our code more testable, thanks to dependency injection. This means that the Angular team also thinks about improving the quality characteristics of our applications.

On top of that, Angular provides a lot of external modules maintained by the Angular team itself, so the same team which works on Angular core also develops the Angular router, Angular forms, Angular Material, etc. This allows us to confidently use the entire stack without worrying that the individual pieces of the puzzle may not fit well together.

Another important fact is that the framework is developed by Google. This implies high-quality engineering power for enterprises and individuals, who can confidently bet on Angular as a long-term solution, as it is unlikely that Google would drop support for the framework unexpectedly. Keeping in mind that Angular is used in tonnes of projects inside Google itself, we can rest assured that it's being tested at a very high scale constantly. Google would never make a compromise in terms of quality.

Development experience is something I am really interested in.
The framework is developed with TypeScript, which provides great tooling support implemented in different text editors and IDEs; with precise auto-completion suggestions and warnings about typing errors, developers' productivity can increase a lot. On top of that, Angular's templates are very convenient for static analysis, which helps programmers build tools that ease the development process even further. A few examples of such tools are codelyzer and the Angular Language Service.

Finally, the framework provides a predictable release schedule following semantic versioning. We always know what changes to expect in the next release, so we can plan accordingly.

What would you like to tell your followers about the latest development in the Angular ecosystem?

MG: Google is constantly making Angular smaller and faster. Each new release introduces better tooling which drops more unused code from the bundles and makes them smaller. The Angular compiler also brings plenty of optimizations which transform our applications into more performant versions of themselves. At the same time, we see very minor incompatibility changes across major releases. The Angular team does an amazing job of making sure that they improve the framework a lot beneath the surface but, for us developers, leave the APIs unchanged.

On the other hand, we have the Angular community, which builds on top of the core. There are plenty of community tools improving our day-to-day development process, and plenty of high-quality materials which help us get started with the framework.

Should beginners use Angular?

Would it be advisable for a newbie JavaScript developer to start using Angular, or would you suggest any other framework?

MG: I'd definitely encourage beginner JavaScript developers to give Angular a try. They will build good habits by using TypeScript and have fewer bugs in their code because of the language's type system. Also, Angular CLI is a great starting point for bootstrapping a new project - no matter whether you're a beginner or an expert.

What makes this book a must-have for would-be Angular developers?

MG: The book provides a gentle introduction to all the framework's concepts. In the first chapters, we focus on ideas rather than digging into unfamiliar code. By the time we start writing applications, readers already have a good understanding of the framework's foundation, so they can just start coding without any struggle. As a former member of the Angular mobile team and a long-term contributor, I have revealed implementation details which will help readers gain in-depth knowledge of how everything works under the hood. From my teaching experience, I've noticed different patterns which help people develop a deeper understanding of new concepts and use them in practice.

The main challenges of making the switch to Angular

What are the main challenges that anyone would face while shifting to the Angular framework?

MG: There might be some conceptual overhead in the beginning. The framework encourages us to use best practices, which means there might be a lot of new concepts, such as dependency injection, two-way data-binding, observables, and Ahead-of-Time compilation. It takes time and practice to digest everything, but the good thing is that we can do that incrementally. In the book I provide a step-by-step approach - we start developing fully functional applications with the bare minimum, and after that increase our expressiveness and architecture by introducing new ideas and concepts.
[Check out this post to learn how to build components using Angular.]

Is Angular's dominance fragile?

Angular is going through a lot of changes presently. Do you feel that so many changes can impact developer retention for a particular tool or technology, especially when there are frameworks like React and Vue breathing down the neck of Angular?

MG: The incorrect perception of fragility in Angular's API is a result of the rewrite of AngularJS. People often incorrectly refer to AngularJS as Angular version 1, which makes developers think that Angular is the next version of AngularJS. This is incorrect. Angular and AngularJS are completely different frameworks; the only thing they have in common is that Angular is inspired by AngularJS.

This incorrect statement also makes people think that the changes which will happen between Angular version 5 and Angular version 6, for instance, will be as impactful as the changes which happened between Angular version 2 and AngularJS. This is wrong as well. In fact, there were almost no backwards-incompatible changes between Angular version 2 and Angular version 5, which means that the API is under strict control and the migration process between major releases is always only a matter of updating the versions of the project's dependencies.

Regarding the other libraries and frameworks out there, such as React and Vue, I see that they are introducing a lot of new ideas as well. There's competition, which is very healthy, since it allows the entire ecosystem to constantly evolve.

3 key features of Angular

Please share three of the most remarkable features that you feel make Angular special. Why should people switch to Angular?

MG: First, the dependency injection mechanism of the framework, which makes our code testable and enforces better separation of concerns. I've had to use alternative technologies in the past, and I can honestly say that this is the feature I missed the most. Second, the Angular compiler, another remarkable feature, which transforms our applications into much more performant versions of themselves. Finally, I like the Angular templates. They have plenty of good features, and thanks to their great design the Angular compiler is able to perform a variety of optimizations over our applications.

Could another framework replace Angular?

Do you see any framework replacing Angular in popularity in the near future, and if yes, why?

MG: The JavaScript landscape has been very dynamic for the past couple of years. Although Vue is built by independent developers, recently there's been a noticeable focus on frameworks developed and maintained by large companies, such as React and Angular. I think this trend will stay because it's much safer to bet the future of your product on a framework supported by a big organization, as it is less likely to have its maintenance and support unexpectedly discontinued. With all the qualities that Angular has, I believe that the framework has a very bright future. It has been adopted by a lot of large companies - Microsoft, Capital One, the NBA, VMware, Google, and many others.

[Check out this post on 5 web development tools that will matter in 2018.]

When Angular made the breaking change to version 2, everyone was divided in their opinion about whether it was the right step. What were your thoughts?

MG: As I mentioned above, Google didn't introduce breaking changes in version 2 - they created a brand new framework inspired by AngularJS.
It's incorrect to refer to Angular as Angular 2, Angular 4, or Angular 5 - it's just Angular. There's a new major release of the framework every six months. At the moment we're at version 5; five months from now Angular will reach version 6, and so on. The differences between Angular version 2 and Angular version 5 are insignificant, with very minimal backwards-incompatible changes. It's pointless to refer to every new version of Angular by placing a number behind it unless we're describing a specific change which happened across releases. In "Switching to Angular" I've focused on the solid foundation of the framework, which is unlikely to change in the near future. At ng-conf 2017 I introduced a tool which can automatically migrate any Angular application between versions 2 and 4, which simplified the transition process even further. On the other hand, AngularJS (which is incorrectly called Angular 1) is still in use, although Google strongly encourages developers to migrate to Angular.

How is this book a stepping stone for a new Angular web developer, and what should they try their hands on next?

MG: "Switching to Angular" provides a great starting point for any intermediate JavaScript developer who wants to learn Angular. If the reader has prior experience with AngularJS, the transition of their mindset from AngularJS to Angular will be extremely smooth because of the comparison between the two frameworks that the book provides. The book also explains in detail what readers should expect regarding the future of the framework. If you are interested in getting started with the Angular framework, Switching to Angular - Third Edition is the go-to book to align with.

Listen: Herman Fung explains what it's like to manage programmers and software engineers [Podcast]

Richard Gall
13 Nov 2019
2 min read
Management is a discipline that isn't short of coverage. In fact, it's probably fair to say that the world throws too much attention its way. This only serves to muddy the waters of management principles and make it hard to determine what really matters. To complicate things, technology is ripping up the rule book when it comes to processes and hierarchies. Programming and engineering are forcing management gurus to rethink what it means to 'manage' today.

However, while the wealth of perspectives on modern management amounts to a bit of a cacophony, looking specifically at what it means to be a manager in a world defined by software can be useful. That's why, in the latest episode of the Packt Podcast, we spoke to Herman Fung. Herman is someone with both development and management experience, and, following the publication of his book The Successful Software Manager earlier this year, he's been spending a lot of time seriously considering what it means to be a software manager.

Listen to the podcast episode: https://soundcloud.com/packt-podcasts/what-does-it-mean-to-be-a-software-manager-herman-fung-explains

Some of the topics covered in this episode include:

- How to approach software management if you haven't done it before
- The impact of Agile and DevOps
- What makes managing in the context of engineering and technology different from other domains
- The differences between leadership and management

You can buy The Successful Software Manager from the Packt store as a print or eBook. Follow Herman on Twitter: @FUNG14

‘If tech is building the future, let's make that future inclusive and representative of all of society’ – An interview with Charlotte Jee

Packt
27 Feb 2018
5 min read
Charlotte Jee is a journalist specialising in technology and politics, currently working as editor for Techworld. Charlotte has founded, hosted and moderated a range of events, including The Techies awards and her own 'Women in Tech Speak Up' event series. In 2017, Charlotte set up Jeneo with the mission to give women and underrepresented people a voice at tech events. Jeneo helps companies source great individuals to speak at their events, and encourages event organisers to source panellists from a range of backgrounds. We spoke to Charlotte in a live Twitter Q&A to find out a bit more about Jeneo and Charlotte's experience as a woman in the tech sector.

Packt: Why did you decide to set up Jeneo?

CJ: Good question! Basically, blame manels (aka all-male panels). I got tired of going to tech events and not seeing even one woman speaking. It's symptomatic of a wider issue with diversity within tech. And I resolved to start to do something about it!

Packt: What have been your most interesting findings from the research you've carried out into women at tech events?

CJ: Interestingly, the ratio of women to men speakers varies hugely across different events. However, promisingly, I've found that those who have worked on this specific issue have successfully upped their number of women speakers. The general finding was that at the top 60 London tech events, women comprise about a quarter of the speakers. The full research hasn't concluded just yet - so stay tuned.

Packt: How has the industry changed since you started in tech? Do you think it is getting easier for women to get into the tech world?

CJ: To be honest, I don't think that much progress has been made since I started working in tech. However, in the last year, with scandals at Uber, Google and VC sexual harassment, the industry is finally starting to really focus on making itself more welcoming for women.

Packt: What do you think is the best part of being a woman in the tech industry?

CJ: I think the answer is the same regardless of your gender - the tech industry is at the forefront of the latest innovations within society and it is constantly changing, so you never get bored.

Packt: What do you think is the worst part of being a woman in the tech industry?

CJ: To be clear, I believe the majority of workplaces within tech are perfectly welcoming to women - however, in the persistent minority that aren't, women can face all sorts of discrimination, both subtle and unsubtle.

Packt: What advice would you give to a woman considering a career in the tech industry?

CJ: Go for it! I honestly can't think of a better industry to work in. You can pretty much guarantee you'll be able to find work no matter what, so long as you keep your skills up to date.

Packt: In what ways - positive or negative - do you think our education system influences the number of women in tech?

CJ: Good question. I actually think that it's in education that this problem starts - from the experts I've spoken to, it seems like our education system too often actively discourages women from pursuing careers in tech.

Packt: What has been your biggest success in your tech career so far?

CJ: I was really honoured when my boss promoted me to TechWorld News editor in 2015. More recently, it'd have to be the Women in Tech Speak Up event for 300 people that I organised (single-handedly) in August 2017. Also, creating this list of 348 women working in tech in the UK, which in many ways kicked this all off.
Packt: What advice would you give to help tech companies increase their gender diversity?

CJ: Way too much for one tweet here. Get senior buy-in, look carefully at your hiring process, be clear on culture and working practices, collect and publish data, provide flexible working (men want this too!) - if you want more detail, please get in touch with me.

Packt: What do you think the future looks like for women working in tech?

CJ: I feel more optimistic now than I have at any point. I think there is a huge amount of desire within the industry to be more inclusive. However, it's translating that goodwill into action - that's what I'll be focusing on.

Packt: Are there any tech companies doing awesome things to increase diversity in their business?

CJ: There's some impressive work from Monzo at the moment, who are highlighting the need for a diverse and inclusive team that represents their user base. There are plenty of others doing good work too, including Amazon UK.

Packt: Are there any particular women in tech who have inspired you and who should we be following on Twitter?

CJ: How long do you have? So many: @emercoleman @kitterati @ChiOnwurah @NAUREENK @annkempster @cathywhite10 @carrie_loves_ @JeniT @lily_dart @annashipman @yoditstanton - and that is truly just for starters. And of course, I have to add, the original woman in tech who inspired me is my Mum, Jane Jee (@janeajee) - she's CEO of RegTech startup Kompli-Global Compliance (@kompliglobal). Plus, obviously, everyone on this list.

Packt: Do you have any tips or advice for women on getting onto panels or bagging speaking slots at tech events?

CJ: Start small if possible and build up your confidence - don't be afraid to put yourself out there! My research has found that speaker submissions overwhelmingly come from men, so get proactive and contact events you'd like to speak at.

Thanks for chatting with us, Charlotte! Find out more about Jeneo here and follow Charlotte on Twitter: @CharlotteJee.

Site reliability engineering: Nat Welch on what it is and why we need it [Interview]

Richard Gall
26 Sep 2018
4 min read
At a time when software systems are growing in complexity, and when the expectations and demands from users have never been more critical, it's easy to forget that just making things work can be a huge challenge. That's where site reliability engineering (SRE) comes in; it's one of the reasons we're starting to see it grow as a discipline and a job role.

The central philosophy behind site reliability engineering can be seen in trends like chaos engineering. As Gremlin CTO Matt Fornaciari said, speaking to us in June, "chaos engineering is simply part of the SRE toolkit." For site reliability engineers, software resilience isn't an optional extra - it's critical. In crude terms, downtime for a retail site means real monetary losses, but the example extends beyond that. Because people and software systems are so interdependent, SRE is a useful way of thinking about how we build software more broadly.

To get to the heart of what site reliability engineering is, I spoke to Nat Welch, an SRE currently working at First Look Media, whose experience includes time at Google and Hillary Clinton's 2016 presidential campaign. Nat has just published a book with Packt called Real-World SRE. You can find it here. Follow Nat on Twitter: @icco

What is site reliability engineering?

Nat Welch: The idea [of site reliability engineering] is to write and modify software to improve the reliability of a website or system. As a term and field, it was founded by Google in the early 2000s, and it has slowly spread across the rest of the industry. It means having engineers dedicated to global system health and reliability, working with every layer of the business to improve the reliability of systems.

"By building teams of engineers focused exclusively on reliability, there can be someone arguing for and focusing on reliability in a way to improve the speed and efficiency of product teams."

Why do we need site reliability engineering?

Nat Welch: Customers get mad if your website is down. Engineers often have trouble weighing system reliability work against new feature work. Because of this, product feature work often takes priority, and reliability decisions are made by guesswork. By building teams of engineers focused exclusively on reliability, there can be someone arguing for and focusing on reliability in a way that improves the speed and efficiency of product teams.

Why do we need SRE now, in 2018?

Nat Welch: Part of it is that people are finally starting to build systems more like how Google has been building for years (heavy use of containers, lots of services, heavily distributed). The other part is a marketing effort by Google to make it easier for them to hire.

What are the core responsibilities of an SRE? How do they sit within a team?

Nat Welch: An SRE is just a specialization of a developer. They sit on equal footing with the rest of the developers on the team, because the system is everyone's responsibility. But while some engineers will focus primarily on new features, the SRE will primarily focus on system reliability. This does not mean either side does not work on the other (SREs often write features, product devs often write code to make the system more reliable, etc.); it just means their primary focus when defining priorities is different.

What are the biggest challenges for site reliability engineers?

Nat Welch: Communication with everyone (product, finance, executive team, etc.), and focus - it's very easy to get lost in firefighting.

What are the 3 key skills you need to be a good SRE?
Nat Welch: Communication skills, software development skills, and system design skills. You need to be able to write code, review code, work with others, break large projects into small pieces and distribute the work among people, but you also need to be able to take a system (working or broken) and figure out how it is designed and how it works.

Thanks Nat! Site reliability engineering, then, is a response to a broader change in the types of software infrastructure we are building and using today. It's certainly a role that offers a lot of scope for ambitious and curious developers interested in a range of problems in software development, from UX to security. If you want to learn more, take a look at Nat's book.
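To put a rough number on the point above that downtime means real monetary losses, here is a small illustrative sketch of the error-budget arithmetic SRE teams commonly use; the SLO target, window, and revenue figure are hypothetical, not from Nat's book.

```python
# Illustrative error-budget arithmetic; all numbers are hypothetical.
SLO = 0.999                      # 99.9% availability target
WINDOW_MIN = 30 * 24 * 60        # 30-day window, in minutes
REVENUE_PER_MIN = 2_000          # assumed revenue lost per minute down

budget_min = (1 - SLO) * WINDOW_MIN
print(f"Error budget: {budget_min:.1f} minutes of downtime per 30 days")
print(f"Revenue at risk if fully spent: ${budget_min * REVENUE_PER_MIN:,.0f}")
```

With a 99.9% target, the budget works out to about 43 minutes a month - a concrete quantity a team can argue over, rather than a vague aspiration to 'be reliable'.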

Greg Walters on PyTorch and real-world implementations and future potential of GANs

Vincy Davis
13 Dec 2019
10 min read
Introduced in 2014, GANs (Generative Adversarial Networks) were first presented by Ian Goodfellow and other researchers at the University of Montreal. A GAN comprises two deep networks: the generator, which generates data instances, and the discriminator, which evaluates the data for authenticity. GANs work not only as a form of generative model for unsupervised learning, but have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning.

In this article, we are in conversation with Greg Walters, one of the authors of the book 'Hands-On Generative Adversarial Networks with PyTorch 1.x', where we discuss some of the real-world applications of GANs. According to Greg, facial recognition and age progression will be among the areas where GANs will shine in the future. He believes that with time GANs will soon be visible in more real-world applications, as with GANs the possibilities are unlimited.

On why PyTorch for building GANs

Why choose PyTorch for GANs? Is PyTorch better than other popular frameworks like Tensorflow?

Both PyTorch and Tensorflow are good products. Tensorflow is based on code from Google and PyTorch is based on code from Facebook. I think that PyTorch is more pythonic and (in my opinion) is easier to learn. Tensorflow is two years older than PyTorch, which gives it a bit of an edge, and it does have a few advantages over PyTorch, like visualization and deploying trained models to the web. However, one of the biggest advantages that PyTorch has is the ability to handle distributed training - it's much easier when using PyTorch. I'm sure that both groups are looking at trying to close the gaps that exist, and we will see big changes in both. Refer to Chapter 4 of my book to learn how to use PyTorch to train a GAN model.

Have you had a chance to explore the recently released PyTorch 1.3 version? What are your thoughts on the experimental feature - named tensors? How do you think it will help developers in getting more readable and maintainable code? What are your thoughts on other features like PyTorch Mobile and 8-bit model quantization for mobile-optimized AI?

The book was originally written to introduce PyTorch 1.0 but quickly evolved to work with PyTorch 1.3.x. Things are moving very quickly for PyTorch, so it presents an ever-moving target. Named tensors are very exciting to me. I haven't had a chance to spend a tremendous amount of time on them yet, but I plan to continue working with them and explore them deeply. I believe that they will make some of the concepts of manipulating tensors much easier for beginners to understand, and will make it easier to read and understand code created by others. This will help create more novel and useful GANs for the future.

The same can be said for PyTorch Mobile. Expanding capabilities to more (and less expensive) processor types like ARM creates more opportunities for programmers and companies that don't have high-end capabilities. Consider the possibilities of running a heavy-duty AI on a $35 Raspberry Pi. The possibilities are endless. With PyTorch Mobile, both Android and iOS devices can benefit from the new advances in image recognition and other AI programs. The 8-bit model quantization allows tensor operations to be done using integers rather than floating-point values, allowing models to be more compact. I can't begin to speculate on what this will bring us in the way of applications in the future. You can read Chapter 2 of my book to know more about the new features in PyTorch 1.3.
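For readers who have not yet seen the generator/discriminator pairing described above expressed in code, here is a minimal, illustrative PyTorch sketch of an adversarial training loop on toy 2-D data. It is a bare-bones illustration, not code from the book, and the network sizes and hyperparameters are arbitrary.

```python
import torch
import torch.nn as nn

# Toy networks: G maps noise to fake samples; D scores samples
# as real (target 1) or fake (target 0).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(64, 2) * 0.5 + 3.0      # stand-in "real" data

for step in range(200):
    # Discriminator step: push real scores toward 1, fakes toward 0.
    fake = G(torch.randn(64, 8)).detach()  # no gradient into G here
    loss_d = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make D score fresh fakes as real.
    fake = G(torch.randn(64, 8))
    loss_g = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print(f"final D loss {loss_d.item():.3f}, G loss {loss_g.item():.3f}")
```

The two optimizers pulling in opposite directions are the whole trick: D improves at telling real from fake, which forces G to produce progressively more convincing samples.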
On challenges and real-world applications of GANs

GANs have found some very interesting implementations in the past year, like a deepfake that can animate your face with just your voice, a neural GAN to fight fake news, a CycleGAN to visualize the effects of climate change, and more. Most GAN implementations are built for experimentation or research purposes. Do you think GANs can soon translate to solving real-world problems? What do you think are the current challenges that restrict GANs from being implemented in real-world scenarios?

Yes, I do believe that we will see GANs starting to move to more real-world applications. Remember that in the grand scheme of things, GANs are still fairly new - 2014 wasn't that long ago. We will see things start to pop in 2020 and move forward from there. As to the current challenges, I think that it's simply a matter of getting the word out. Many people who are conversant with machine learning still haven't heard of GANs, mainly because they are so busy with what they know and are comfortable with that they haven't had the time and/or energy to explore GANs yet. That will change. Of course, things change on almost a daily basis, so who can guess where we will be in another two years?

Some of the existing and future applications that GANs can help implement include new photo-realistic scenes for video games, movies, and television; taking sketches from designers and making realistic photographs in both the fashion industry and architecture; taking a partial facial image and making a rotated view for better facial recognition; age progression and regression; and so much more. Pretty much anything with a pattern, be it image or text, can be manipulated using GANs.

There is a variety of GANs available out there. How should one approach them in terms of problem solving? What are the other possible ways to group GANs?

That's a very hard question to answer. You are correct, there are a large number of GANs "in the wild" and some work better for some things than others. That was one of the big challenges of writing the book. Add to that, new GANs are coming out all the time that continue to get better and better and extend the possibility matrix. The best suggestion that I could make here is to use the resources of the Internet and read, read and read. Try one or two to see what works best for your application. Also, create your own category list based on your research. Continue to refine the categories as you go, then share your findings so others can benefit from what you've learned.

New GAN implementations and future potential

In your book, 'Hands-On Generative Adversarial Networks with PyTorch 1.x', you have demonstrated how GANs can be used in image restoration problems, such as super-resolution image reconstruction and image inpainting. How do SRGANs help in improving the resolution of images and performing image inpainting? What other deep learning models can be used to address image restoration problems? What are the other key image-related problems where GANs are useful and relevant?

Well, that is sort of like asking "how long is a piece of string?". Picture a painting in a museum that has been damaged by fire or over time. Right now, we have to rely on very highly trained experts who spend hundreds of hours to bring the painting back to its original glory. However, it's still an approximation of what the expert THINKS the original was meant to be.
With things like SRGAN, we can see old photos "restored" to what they were originally. We can already see colorized versions of some black-and-white classic films and television shows. The possibilities are endless. Image restoration is not limited to GANs, but at the moment they seem to be one of the most widely used methods. Fairly new methods like ARGAN (Artifact Reduction GAN) and FD-GAN (Face De-Morphing GAN or Feature Distilling GAN) are showing a lot of promise. By the time I'm finished with this interview, there could be three or more others that will surpass these. ARGAN is similar to and can work with SRGAN to aid in image reconstruction. FD-GAN can be used to work with human pose images, creating different poses from a totally different pose. This has any number of possibilities, from simple fashion shots to, again, photo-realistic images for games, movies, and television shows. Find out more about image restoration in Chapter 7 of my book.

GANs are labeled as innovative due to their ability to generate fake data that looks real. The latest developments in GANs allow them to generate high-dimensional fake data, or image and video content, that can easily go undetected. What is your take on the ethical issues surrounding GANs? Don't you think developers should target creating GANs that will be good for humanity rather than developing scary AI capabilities?

Good question. However, the same question has been asked about almost every advance in technology since rainbows were in black and white. Take, for example, the discussion in Chapter 6 where we use CycleGAN to create van Gogh-like images. As I was running the code we present, I was constantly amazed by how well the generator kept coming up with better fakes that looked more and more like they were done by the Master. Yes, there is always the potential for using the technology for "wrong" purposes. That has always been the case. We already have AI that can create images that can fool talent scouts, and fake news stories. J. Hector Fezandie said back in 1894 that "with great power comes great responsibility", and it was repeated by Peter Parker's Uncle Ben thanks to Stan Lee. It was very true then and is still just as true.

How do you think GANs will be contributing to AI innovations in the future? Are you expecting/excited to see an implementation of GANs in a particular area/domain in the coming years?

Five years ago, GANs were pretty much unknown and were only in the very early stages of reality. At that point, no one knew the multitude of directions that GANs would head towards. I can't begin to imagine where GANs will take us in the next two years, much less the far future. I can't imagine any area that wouldn't benefit from the use of GANs. One of the subjects we wanted to cover was facial recognition and age progression, but we couldn't get permission to use the dataset. It's a shame, but that will be one of the areas in which GANs will shine in the future. Things like biomedical research could be one area that might really be helped by GANs. I hate to keep using this phrase, but the possibilities are unlimited.

If you want to learn how to build, train, and optimize next-generation GAN models and use them to solve a variety of real-world problems, read Greg's book 'Hands-On Generative Adversarial Networks with PyTorch 1.x'. This book highlights all the key improvements in GANs over generative models and will guide you in building GANs with the help of hands-on examples.
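To give a flavor of how SRGAN-style generators enlarge images, here is a minimal, illustrative PyTorch sketch of the sub-pixel (PixelShuffle) upsampling block such generators typically stack; the channel sizes are arbitrary, and this is not code from the book.

```python
import torch
import torch.nn as nn

# SRGAN-style sub-pixel upsampling: a convolution expands channels 4x,
# then PixelShuffle rearranges them into a 2x-larger feature map.
class UpsampleBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * 4, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(2)  # (N, 4C, H, W) -> (N, C, 2H, 2W)
        self.act = nn.PReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.shuffle(self.conv(x)))

x = torch.randn(1, 64, 24, 24)      # a 24x24 feature map...
print(UpsampleBlock()(x).shape)     # ...becomes 48x48: [1, 64, 48, 48]
```

Stacking two such blocks gives the 4x upscaling SRGAN is usually demonstrated with.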
Related reading:

- What are generative adversarial networks (GANs) and how do they work? [Video]
- Generative Adversarial Networks: Generate images using Keras GAN [Tutorial]
- What you need to know about Generative Adversarial Networks
- ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more
- Interpretation of Functional APIs in Deep Neural Networks by Rowel Atienza

Selenium and data-driven testing: An interview with Carl Cocchiaro

Richard Gall
17 Apr 2018
3 min read
Data-driven testing has become a lot easier thanks to tools like Selenium. That's good news for everyone in software development. It means you can build better software that works for users much more quickly. While the tension between performance and the need to deliver will always remain, it's thanks to the efforts of developers to improve testing tools that we are where we are today. We spoke to Carl Cocchiaro about data-driven testing and much more. Carl is the author of Selenium Framework Design in Data-Driven Testing. He also talked to us about his book and why it's a useful resource for web developers interested in innovations in software testing today.

What is data-driven testing?

Packt: Tell us a little bit about data-driven testing.

Carl Cocchiaro: Data-driven testing has been made very easy with technologies like Selenium and TestNG. Users can annotate test methods and add attributes like data providers and groupings to them, allowing users to iterate through the methods with varying data sets.

The key features

Packt: What are the 3 key features of Selenium that make it worth people's attention?

CC: Platform independence, its support for multiple programming languages, and its grid architecture, which is really useful for remote testing.

Packt: Could someone new to Java start using Selenium? Or are there other frameworks?

CC: Selenium WebDriver is an API that can be called in Java to test the elements on a browser or mobile page. It is the gold standard in test automation; everyone should start out learning it, and it's pretty fun to use.

What are the main challenges of moving to Selenium?

Packt: What are the main challenges someone might face when moving to the framework?

CC: Like anything else, the language syntax has to be learned in order to be able to test the applications. Along with that, the TestNG framework coupled with Selenium has lots of features for data-driven testing, and there's a learning curve on both.

How to learn Selenium

Packt: How is your book a stepping stone for a new Selenium developer?

CC: The book details how to design and develop a Selenium framework from scratch and how to build in data-driven testing using TestNG and a Data Provider class. It's complex from the start but has all the essentials to create a great testing framework. Readers should get the basics down first before moving towards other types of testing, like performance, REST API, and mobile.

Packt: What makes this book a must-have for anyone interested in or working with the tool?

CC: Many Selenium guides are geared towards getting users up and running, but this is an advanced guide that teaches all the tricks and techniques I've learned over 30 years.

Packt: Can you give people 3 reasons why they should read your book?

CC: It's a must-read if you're designing and developing new frameworks, it circumvents all the mistakes users make in building frameworks, and you will be a Selenium rockstar at your company after reading it!

Learn more about software testing:

- Unit Testing and End-To-End Testing
- Testing RESTful Web Services with Postman
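To make the data-driven pattern Carl describes concrete, here is a minimal, illustrative sketch. The book builds its framework in Java with TestNG @DataProvider annotations; this analogous Python version swaps in pytest's parametrize marker, and the URL, locators, and expected titles are hypothetical.

```python
# Data-driven Selenium test in Python; pytest.mark.parametrize plays
# the role of a TestNG @DataProvider (the book's examples are in Java).
# URL, field names, and expected titles are hypothetical.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

LOGIN_CASES = [
    ("standard_user", "secret", "Dashboard"),
    ("locked_out_user", "secret", "Login"),
]

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

# Each tuple becomes a separate test run, mirroring how a DataProvider
# iterates one annotated test method over varying data sets.
@pytest.mark.parametrize("user,password,expected_title", LOGIN_CASES)
def test_login(driver, user, password, expected_title):
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys(user)
    driver.find_element(By.NAME, "password").send_keys(password)
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    assert expected_title in driver.title
```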