
Overview and Creating Explore

Overview

Sprinkle provides a consolidated view of your data from multiple data sources. Using Explores, you can write sophisticated SQL queries on the data imported from those sources, revealing insights from your business data.

The page lists all the data explorations created by you and other team members.

  • My Explores tab
  • Lists all data explorations owned by you.
  • Shared with me tab
  • Lists all data explorations shared with you by other team members (identified by the owner field).

Creating Explore

To create an explore, click "New", then name and create the explore. You can now run your queries.

Creating explore

Explore View

The page shows the result of the latest job run of your explore report. The report can be scheduled, relieving you from running it manually again and again. Each run executes against the latest data from your imported data sources, giving you up-to-date insights. You can use explore reports for your daily, weekly, or monthly reporting.

Email button

Email the report to your team members. Emails can be scheduled daily, weekly, or monthly.

Explore view

Explore Edit

This page lets you modify an existing explore report: change the SQL query, run the report again, or schedule the report for auto-run.

Explore edit

  • Save and Run
  • After you are done editing your report, click this button to get the latest results. A new job is launched, and the results appear in the Result view below.
  • Auto-Run
  • Schedules the explore report. It will run every night at the default time of 00:30 AM. Users with the Developer role can change the default schedule.
  • Save
  • Saves your changes.
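To make the Auto-Run default concrete: with a nightly 00:30 AM schedule, the next run is the first 00:30 at or after the current time. A minimal sketch of that calculation (illustrative only, not Sprinkle's scheduler code):

```python
from datetime import datetime, timedelta

def next_run(now):
    """Return the next 00:30 AM at or after `now` (the default Auto-Run time)."""
    candidate = now.replace(hour=0, minute=30, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # 00:30 already passed today; run tomorrow
    return candidate

print(next_run(datetime(2024, 1, 1, 10, 0)))  # 2024-01-02 00:30:00
```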

Schema Browser

This lists your databases and the tables within them. Select a database and table to see all of its columns and their type information.

Schema browser

Query

Type your SQL query here. After building your query, click SAVE AND RUN to see the results. Any error in the query can be seen in the Jobs tab; the number shown for each line makes it easy to locate errors.
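An explore query typically joins tables imported from different data sources. The sketch below runs a query of that shape locally with sqlite3; the table and column names (customers, orders) are hypothetical, chosen only to illustrate the kind of SQL you would type into the editor:

```python
import sqlite3

# Build two small in-memory tables standing in for imported data sources
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Asha"), (2, "Ravi")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0)])

# Revenue per customer -- a join-and-aggregate query like those run in an explore
rows = cur.execute("""
    SELECT c.name, SUM(o.amount) AS revenue
    FROM orders o JOIN customers c ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('Asha', 350.0), ('Ravi', 75.0)]
```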

Query

Jobs

For every run of your query, a job is launched against your cluster. A job can be in one of these states: Queued, Running, Success, Failed, or Cancelled. To see a job's result, click Show. You can also view the results of older job runs.
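The states above split into active states (Queued, Running) and terminal ones (Success, Failed, Cancelled). A sketch of how client code might wait on that lifecycle; the `get_state` function here is a hypothetical stand-in, since this document does not show a job-status API:

```python
import time

# Hypothetical stand-in for a job-status lookup: a canned sequence of states
_states = iter(["Queued", "Running", "Running", "Success"])

def get_state():
    return next(_states)

# States after which the job's result never changes
TERMINAL = {"Success", "Failed", "Cancelled"}

def wait_for_job(poll_seconds=0):
    """Poll until the job reaches a terminal state, then return that state."""
    while True:
        state = get_state()
        if state in TERMINAL:
            return state
        time.sleep(poll_seconds)

final_state = wait_for_job()
print(final_state)  # Success
```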

Jobs

Result

The results of the job are displayed as a data table. You can also download the results as a CSV file.

Result


import requests
from requests.auth import HTTPBasicAuth

# Stream the latest result of an explore as CSV, using basic auth with your API keys
auth = HTTPBasicAuth('<API_KEY>', '<API_SECRET>')
response = requests.get("https://<hostname>/api/v0.4/explore/streamresult/<EXPLORE_ID>",
                        auth=auth)

print(response.content)
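The streamed response body is plain CSV, so it can be parsed with the standard library. A minimal sketch, assuming a response fetched from the streamresult endpoint as above (the sample text below is illustrative):

```python
import csv
import io

def rows_from_csv(text):
    """Parse a CSV response body into a list of rows (first row is the header)."""
    return list(csv.reader(io.StringIO(text)))

# With a live response you would pass response.content.decode("utf-8");
# here we use a small sample body instead of calling the API.
sample = "id,name\n1,Asha\n2,Ravi\n"
rows = rows_from_csv(sample)
print(rows[0])  # ['id', 'name']
```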

library(httr)

username <- '<API KEY>'
password <- '<API SECRET>'

# Stream the latest explore result as CSV and read it into a data frame
resp <- GET("https://<hostname>/api/v0.4/explore/streamresult/<EXPLORE ID>",
            authenticate(username, password, type = "basic"))

result <- read.csv(textConnection(content(resp, 'text')))

/* Download the data */
filename resp temp;
proc http
   url="https://<hostname>/api/v0.4/explore/streamresult/<EXPLORE ID>"
   method="GET"
   webusername="<API KEY>"
   webpassword="<API SECRET>"
   out=resp;
run;

/* Import the response into a CSV dataset */
proc import
   file=resp
   out=csvresp
   dbms=csv;
run;

/* Print the data */
proc print data=csvresp;
run;

import requests
import json

# Create a new CSV data source by uploading a file
url = 'http://<hostname>/api/v0.4/createCSV'

username = 'API_KEY'
password = 'API_SECRET'

files = {'file': open('FILE_PATH.csv', 'rb')}
values = {'projectname': 'PROJECT_NAME', 'name': 'CSV_DATASOURCE_NAME'}

r = requests.post(url, files=files, data=values, auth=(username, password))

res_json = json.loads(r.text)

print(res_json['success'])

import requests
import json

# Update an existing CSV data source by uploading a replacement file
url = 'http://<hostname>/api/v0.4/updateCSV'

username = 'API_KEY'
password = 'API_SECRET'

files = {'file': open('FILE_PATH.csv', 'rb')}
values = {'projectname': 'PROJECT_NAME', 'name': 'CSV_DATASOURCE_NAME'}

r = requests.post(url, files=files, data=values, auth=(username, password))

res_json = json.loads(r.text)

print(res_json['success'])

import requests

# Stream the latest explore result as CSV, using basic auth
url = 'https://<hostname>/api/v0.4/explore/streamresult/<EXPLORE ID>'

username = 'API_KEY'
password = 'API_SECRET'

r = requests.get(url, auth=(username, password))
print(r)
print(r.text)

import requests
import pandas as pd
import io

# Stream the explore result and load it into a pandas DataFrame,
# authenticating with the API secret in an Authorization header
url = 'https://<hostname>/api/v0.4/explore/streamresult/<EXPLORE ID>'
secret = 'API_SECRET'

r = requests.get(url, headers={'Authorization': 'SprinkleUserKeys ' + secret})

df = pd.read_csv(io.StringIO(r.text), sep=',')

import requests
import pandas as pd
import io

# Stream a segment's result and load it into a pandas DataFrame,
# authenticating with the API secret in an Authorization header
url = 'https://<hostname>/api/v0.4/segment/streamresult/<SEGMENT ID>'
secret = 'API_SECRET'

r = requests.get(url, headers={'Authorization': 'SprinkleUserKeys ' + secret})

df = pd.read_csv(io.StringIO(r.text), sep=',')

import requests
import json

# Create a new CSV data source, authenticating with the API secret header
url = 'http://<hostname>/api/v0.4/createCSV'

files = {'file': open('path/file.csv', 'rb')}
values = {'projectname': 'PROJECT_NAME', 'name': 'csv_datasource_name/table_name'}

secret = 'API_SECRET'

r = requests.post(url, files=files, data=values,
                  headers={'Authorization': 'SprinkleUserKeys ' + secret})

res_json = json.loads(r.text)

import requests
import json

# Update an existing CSV data source, authenticating with the API secret header
url = 'http://<hostname>/api/v0.4/updateCSV'

files = {'file': open('path/file.csv', 'rb')}
values = {'projectname': 'PROJECT_NAME', 'name': 'csv_datasource_name/table_name'}

secret = 'API_SECRET'

r = requests.post(url, files=files, data=values,
                  headers={'Authorization': 'SprinkleUserKeys ' + secret})

res_json = json.loads(r.text)