Users of the Data Science Workbench are provided with a dedicated cluster, where they can access Evergage's data through a safe, secure, read-only proxy. The cluster comes pre-installed with a suite of familiar tools that run on top of Apache Spark. Apache Zeppelin provides a notebook in which Python, R, and Scala can be used together, sharing data across languages. Additionally, libraries allow data to be pulled from Evergage into Spark DataFrames in a clear, well-documented manner.
Leverage Your Model Output in Evergage
Not only can your data scientists create custom models using the Evergage-housed data, but they also have a pathway to integrate those learnings back into Evergage: uploading the results of your modeling as custom attributes. For instance, you could flag particular customers as "high value," "seasonal buyers," or "likely to churn" based on your analysis.
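As a minimal sketch of what that workflow could look like, the snippet below derives two illustrative flags from per-customer order history and shapes them as custom-attribute records. The attribute names (`highValue`, `likelyToChurn`), the thresholds, and the final upload step are assumptions for illustration, not Evergage's actual API.

```python
# Illustrative only: attribute names, thresholds, and the upload step are
# assumptions for this sketch, not Evergage's actual API.

def score_customers(orders):
    """Derive simple custom-attribute flags from per-customer order amounts.

    orders: dict mapping customer id -> list of order amounts.
    Returns a list of (customer_id, attribute_name, value) records, the kind
    of output you would then upload to Evergage as custom attributes.
    """
    records = []
    for customer_id, amounts in orders.items():
        total = sum(amounts)
        # Toy rule: lifetime spend of $1000+ marks a high-value customer.
        records.append((customer_id, "highValue", total >= 1000))
        # Toy rule: no orders at all is treated as a churn risk.
        records.append((customer_id, "likelyToChurn", len(amounts) == 0))
    return records

orders = {
    "u1": [600, 450],  # high lifetime spend
    "u2": [25],        # active but low spend
    "u3": [],          # no purchase history
}
records = score_customers(orders)
for record in records:
    print(record)
```

In a real notebook the input would be a Spark DataFrame pulled through the read-only proxy rather than a hard-coded dict, and the resulting records would be sent back to Evergage through its custom-attribute upload mechanism.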