Logistic regression is a model that is appropriate to use when the dependent variable is binary, i.e. 0 or 1, True or False, Yes or No. Logistic regression belongs to the family of regression analysis techniques and can therefore be interpreted as a predictive analytics model.
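To make the idea concrete, here is a minimal sketch of how a logistic regression produces a binary prediction: a linear combination of features is passed through the sigmoid function, yielding a probability for the positive class. The weights, bias, and feature values below are hypothetical placeholders, not fitted parameters.

```python
import math

def sigmoid(z):
    # The logistic (sigmoid) function maps any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(weights, bias, features):
    # Linear combination of features, passed through the sigmoid,
    # gives the estimated probability that the label is 1
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(z)

# Hypothetical weights and inputs for illustration only
p = predict_proba([0.8, -0.5], 0.1, [2.0, 1.0])
label = 1 if p >= 0.5 else 0  # threshold the probability to get 0 or 1
```

In practice you would fit the weights from data (e.g. with scikit-learn's `LogisticRegression`) rather than setting them by hand.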

Pandas in Python is an awesome library to help you wrangle your data, but it can only get you so far. When you start moving into the Big Data space, PySpark is much more effective at accomplishing what you want. This post aims to help you migrate what you know about Pandas to PySpark. If you are new to Spark, check out this post about Databricks, and go spin up a cluster to play around.

Apache Spark and PySpark

Before we get going, let's take a step back and talk about Apache Spark. Spark is a fast, general-purpose engine for large-scale data processing. Spark uses distributed computing to achieve higher speeds on large datasets. When you submit a request to Spark, the driver node distributes the workload to a number of worker nodes, which process parts of the request in parallel. Think of it as an improvement to original…
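As a small taste of the migration, here is a sketch of the same filter-and-select operation in Pandas, with the PySpark equivalent shown in comments (it assumes an already-created `SparkSession` named `spark`, which is not set up here). The data is a made-up example.

```python
import pandas as pd

# A tiny illustrative DataFrame (hypothetical data)
df = pd.DataFrame({"name": ["a", "b", "c"], "value": [1, 2, 3]})

# Pandas: filter rows where value > 1, then select the "name" column
result = df[df["value"] > 1]["name"].tolist()

# The PySpark equivalent would look roughly like this
# (a sketch, assuming an existing SparkSession called `spark`):
#   sdf = spark.createDataFrame(df)
#   sdf.filter(sdf.value > 1).select("name").show()
```

The shape of the API is similar, but the PySpark version is lazily evaluated and distributed across worker nodes rather than executed in local memory.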

Before we get started, let’s answer the question “what is git?”. Git is a distributed version control system for keeping track of changes in source code during the software development process.

The goal of Git is to make collaboration on large projects easier for teams. Git improves speed, integrity, and workflow efficiency, and it is used by more than 1,700 companies around the world (through hosting platforms such as GitHub and GitLab).