Microservices and the Need to Use Your Resources More Considerately

The story of breaking apart the monolithic backend and dealing with unexpected slow page load times
by

Johannes Schirrmeister

on 6 July 2015

Transcript of Microservices and the Need to Use Your Resources More Considerately

Microservices and the Need to Use Your Resources More Considerately
San Francisco Django Meetup October 2014
Transitioning to Microservices
Conclusion
Thank you!
Seeing What We Got
Increased reliability
Optimizing performance
Starts with optimizing tools
Searching for improvement
Best Of
Big monolithic backend
Not so big monolithic backend
Presentation service
Before
After
Thrift
Acceptable page load times
Easier development
Landing Page
Time consumption per page
Load times for different percentiles
Breakdown for different components
Sample requests
beware of averages!
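The warning about averages is easy to demonstrate. A small sketch (the latency numbers are made up) shows how a slow tail barely moves the mean while a high percentile exposes it:

```python
# Hypothetical page load times in ms: 90 fast requests, 10 slow ones.
latencies = [100] * 90 + [2000] * 10

def percentile(values, p):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    rank = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[rank]

mean = sum(latencies) / len(latencies)   # 290 ms -- looks tolerable
p50 = percentile(latencies, 50)          # 100 ms -- the typical user
p95 = percentile(latencies, 95)          # 2000 ms -- 1 in 10 users waits 2 s
```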
for p in user_public_prezis:
    # each attribute access can trigger a hidden remote call
    p.owner.user_profile.profile_url
Django Debug Toolbar
Extended with custom panel for Thrift Calls
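Django Debug Toolbar specifics aside, the core of such a panel is per-request call records. A minimal recorder (all names hypothetical, not the talk's actual implementation) could wrap each Thrift client call like this:

```python
import functools
import time

class CallRecorder:
    """Collects (name, duration) pairs for one request; a custom
    debug-toolbar panel would render these records with the response."""
    def __init__(self):
        self.calls = []

    def instrument(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                self.calls.append((func.__name__, time.perf_counter() - start))
        return wrapper

# Stand-in for a real Thrift client method.
def get_user(user_id):
    return {"id": user_id}

recorder = CallRecorder()
get_user = recorder.instrument(get_user)
get_user(1)
get_user(2)
# recorder.calls now holds two entries with their durations.
```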
Moving to a microservice architecture introduces latency and forces you to be more considerate about when to access your data sources
@j_schirrmeister
Optimizing Performance
Cache everywhere
Query in batches
Request-level caching
def get(self, id, ignore_global_cache=False):
    if ignore_global_cache:
        return super(RemoteUserFactory, self).get(id)
    else:
        return get_user(id, request=self.request)
Invalidation?!
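Request-level caching sidesteps the invalidation question: the cache lives only as long as the request, so entries are at most a page load stale and never need explicit purging. A minimal sketch (names hypothetical, not the talk's actual factory):

```python
class RequestUserCache:
    """Memoizes user lookups for the lifetime of a single request.
    Discarding the cache with the request avoids global invalidation."""
    def __init__(self, fetch):
        self._fetch = fetch      # e.g. a remote get_user(id) call
        self._cache = {}

    def get(self, user_id):
        if user_id not in self._cache:
            self._cache[user_id] = self._fetch(user_id)
        return self._cache[user_id]

calls = []
def fetch_user(user_id):         # stand-in for the remote service
    calls.append(user_id)
    return {"id": user_id}

cache = RequestUserCache(fetch_user)
cache.get(7)
cache.get(7)
cache.get(8)
# Three lookups, but only two remote calls were made.
```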
Not so big monolithic backend
Presentation service
Landing Page
Worker
REST API
for p in permissions:
    context['full_name'] = p.user.get_full_name()
    context['avatar'] = p.user.user_profile.facebook.get_picture()

user_ids = [p.user_id for p in permissions]
users = get_users(user_ids)
users_avatars = get_avatars(user_ids)

ORMs are convenient, but lead to inefficiencies
for p in public_prezis:
    context['owner_profile_url'] = p.owner.user_profile.profile_url

hacky? yes
More Optimizations
for p in public_prezis:
    user = get_user(p.owner, request)
    user_profile = get_user_profile(user, request)
    context['owner_profile_url'] = user_profile.profile_url
100ms
300ms
a lot of time spent on inefficient queries
200ms
2 requests across data centers
premature optimization is the root of all evil
historically, several storage backends with their configurations stored in MySQL
Storage.objects.get(pk=self.storage_id)
through caching and batching
70% drop
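Those storage configurations change rarely, so even simple memoization avoids repeating the MySQL lookup. A sketch of the idea (the model access is faked; this is not the talk's actual code):

```python
import functools

lookups = []                        # records each real "MySQL" hit

@functools.lru_cache(maxsize=None)
def get_storage_config(storage_id):
    # Stand-in for Storage.objects.get(pk=storage_id), which would
    # otherwise hit MySQL on every call.
    lookups.append(storage_id)
    return {"id": storage_id, "bucket": f"bucket-{storage_id}"}

for _ in range(100):
    get_storage_config(1)           # 100 lookups, a single database hit
```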
Cache
for p in permissions:
    context['full_name'] = p.user.get_full_name()
    context['avatar'] = p.user.user_profile.facebook.get_picture()

for p in permissions:
    context['full_name'] = users[p.user_id].get_full_name()
    context['avatar'] = users_avatars[p.user_id]
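Why the batched variant wins can be sketched with a call counter (service functions are hypothetical stand-ins for the remote API): the naive loop pays one round trip per permission, the batched version pays one in total.

```python
remote_calls = 0

def get_user(user_id):              # one network round trip per call
    global remote_calls
    remote_calls += 1
    return {"id": user_id, "full_name": f"user-{user_id}"}

def get_users(user_ids):            # one round trip for the whole batch
    global remote_calls
    remote_calls += 1
    return {uid: {"id": uid, "full_name": f"user-{uid}"} for uid in user_ids}

permissions = [{"user_id": uid} for uid in range(10)]

# Naive: one call per permission -> 10 round trips.
for p in permissions:
    get_user(p["user_id"])
naive_calls = remote_calls

# Batched: collect ids first, fetch once -> 1 round trip.
remote_calls = 0
users = get_users([p["user_id"] for p in permissions])
batched_calls = remote_calls
```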
but effective!
Lost performance can be regained
Invitation
Prezi Office in Budapest
Craft Conference, April 22-24