Ray Serve User Survey!
We need your feedback! We are constantly trying to better understand how and why people use Ray Serve so we can improve the library and make sure we are prioritizing the right things.
Please also join us in the #serve channel of the Ray Slack:
https://forms.gle/9TSdDYUgxYs8SA9e8
* Indicates required question
Email Address *
What organization do you work for (company, university)? *
Which best describes your current serving stack? *
Not using machine learning yet
Only doing batch/offline inference
Serving ML models with a generic web serving tool like Flask, FastAPI, etc.
Serving ML models with a specialized ML serving framework (Ray Serve, SageMaker, KFServing, etc.)
Other:
What is the most important serving framework feature for you?
Ease of use
Python-native API
Integration with existing web servers (e.g., FastAPI)
Model composition / pipelines
Easy scaling
Performance (high throughput, low latency)
Other:
What model serving options have you looked at/considered?
AWS SageMaker
Google Vertex AI
Azure ML
Seldon Core
KFServing
TorchServe
TensorFlow Serving
ONNX
BentoML
Flask/FastAPI/DIY web server
Other:
What cloud(s) are you running on?
AWS
GCP
Azure
IBM
On-premise
Other:
What stage are you in with Ray Serve? *
Not using Ray Serve (just checking it out)
Prototyping / running a proof of concept (POC)
Going to production
Actively in production
Other:
Anyscale is building a managed Ray Serve solution to reduce the operational burden of going to production. Are you interested in learning more?
Yes
No
Would you be interested in a hands-on tutorial showing how to deploy Ray Serve as a managed service?
Yes
No
Page 1 of 2
This form was created inside of Anyscale, Inc.