Optimization Tips/Tricks in Python

Transcript of the presentation "Optimization Tips/Tricks in Python"
When to optimize?
- Be stingy when using resources
- But: "Premature optimization is the root of all evil." -- Donald Knuth
What to optimize? Trade-offs
- May reduce code readability
- May make the program harder to maintain/debug
- May totally screw up your elegant design
- May break other parts of your program

Optimization Analysis
Question: where should I search for the bottlenecks?
- Time complexity (Big O): the amount of time needed to finish the task
- Space complexity: the amount of memory needed to store values/variables

Time Complexity
- Big O notation, O(n), describes the performance or complexity of an algorithm
- sum_list(n) = O(n); add5(n) = O(1)

Space Complexity
- The program has to run fast enough, but not faster!
- Try to do every operation in-place unless a copy is actually needed: sorted() vs. .sort(), swapping variables

Design Principles
- Change your solution/algorithm if you have to
- Avoid loops. They are your enemy.

Memoization
- Never calculate the result for the same input again
- Reduces the number of repeated operations
- Reduces time complexity, but requires extra space
- Factorial example: 4! = 4 * 3! = 4 * 3 * 2! = 4 * 3 * 2 * 1! = 4 * 3 * 2 * 1 * 0!

Lazy Evaluation
- Evaluate something only when it is needed
- Dramatically reduces space complexity
- Prefer xrange() to range() every time (in Python 2)
- Use generators (yield) over eager loops

range() vs. xrange()
- Avoid exhausting your memory
- range() evaluates the entire list no matter what
- xrange() defers evaluation until each item is actually needed

String/List Concatenation
- Avoid using "+" whenever possible: "+" produces a new object (space complexity)
- As a fallback, you can override __add__ / __iadd__
- Use .join() for strings and .extend() for lists; the .join() version is ~30% faster than naive "+" concatenation

List Comprehension
- They look cool and sophisticated
- In most cases even faster than map() and filter()
- Highly optimized for the interpreter
- And again: avoid loops!
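A minimal sketch of three of the techniques above: a memoized factorial, str.join() for string concatenation, and a list comprehension in place of an explicit loop. This uses Python 3 syntax, where range() is already lazy like Python 2's xrange(); the function and variable names are illustrative, not from the talk.

```python
def factorial(n, _cache={}):
    # Memoization: never calculate the result for the same input again.
    # The mutable default argument persists between calls and acts as a cache.
    if n in _cache:
        return _cache[n]
    result = 1 if n < 2 else n * factorial(n - 1)
    _cache[n] = result
    return result

# String concatenation: "+" produces a new object on every step,
# while str.join() builds the result in one pass.
words = ["optimize", "only", "when", "needed"]
sentence = " ".join(words)

# List comprehension instead of an explicit loop (or map()/filter()).
squares = [x * x for x in range(10) if x % 2 == 0]
```

Subsequent calls like factorial(6) reuse the cached factorial(5) instead of recursing all the way down, which is exactly the repeated-operation reduction the slides describe.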
(List comprehensions: ~54% faster than the explicit loop.)

Method/Variable Lookups
- Avoid reaching for variables outside the current scope
- Bind global methods/functions to a local variable: ~15% faster
- Rationale: a global or attribute lookup is an expensive operation

Data Structures
- Use the fastest one that fits your algorithm best
- There will always be trade-offs
- Choose based on the operations you need:
- getting an item
- testing membership of an item
It is all about the Big O.

list vs. deque
List:
- Behaves like an array, but is not one: it is NOT fixed-length
- Is copied to a wider memory block whenever it needs extra space
- Appending to the left shifts all the existing elements
- Is stored as one contiguous block, so item retrieval is very fast
Deque:
- Behaves like a doubly linked list
- Is stored in fragments, so item retrieval is slow
- Is never copied to a wider memory block when it needs extra space
- No shifting occurs when appending to either side
Results: list is ~83x faster at retrieval; deque is ~91% faster at left-insertion.

Testing Membership
- Sets are perfect for "x in ..." lookups
- Lists must traverse the entire collection in the worst case
- Sets are ~6000% faster
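The lookup advice above can be sketched as follows: binding a frequently used global (here math.sqrt, chosen only as an example) to a local name replaces a repeated global-plus-attribute lookup with a cheap local one. The ~15% figure is the speaker's measurement on CPython 2.7; exact numbers vary by interpreter version.

```python
import math

def norms_slow(values):
    # math.sqrt is looked up (global + attribute lookup) on every iteration.
    return [math.sqrt(v) for v in values]

def norms_fast(values, sqrt=math.sqrt):
    # The lookup happens once, at function definition time;
    # inside the loop, sqrt is a fast local variable.
    return [sqrt(v) for v in values]
```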
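A quick sketch of the data-structure trade-offs above, using collections.deque for cheap left-appends and a set for membership testing. Timings are omitted; the percentages in the talk reflect these Big-O differences.

```python
from collections import deque

# Appending to the left of a list shifts every existing element: O(n).
lst = [2, 3, 4]
lst.insert(0, 1)

# A deque appends to either side in O(1), with no shifting.
dq = deque([2, 3, 4])
dq.appendleft(1)

# Membership: a list scans the whole collection in the worst case (O(n));
# a set is a hash lookup (O(1) on average).
haystack = set(range(100000))
found = 99999 in haystack
```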
Speaker: Özgür Vatansever

Before we begin: software and hardware
- Does not cover Python 3.x related features
- I'll be testing on the CPython implementation
- Mac OS X 10.8.2 Mountain Lion, Intel Dual Core i5 2.4 GHz, 6 GB 1067 MHz DDR3 RAM, Python 2.7.3

(List comprehensions, cont.: ~68x faster, with no function-call overhead compared to map()/filter(). Lookups, cont.: the expensive instruction is LOAD_ATTR.)

Global Interpreter Lock (GIL)
- Prevents threads from working simultaneously
- CPython memory management is not thread-safe
- No multi-core utilization: it makes the process execute threads sequentially
- Only one thread can acquire the GIL at a time to work on the interpreter
- While thread "t1" is executing, "t2" waits
- Think of the GIL as a global lock wrapping your entire program that allows only one thread at a time to execute it

Multithread vs. multiprocess approach
- Solution: spawn a NEW PROCESS (the multithreaded version is ~28% slower)
- Processes do not share state, which means they never have to fight over acquiring the GIL

Process vs. Thread
- Sharing variables between processes is not as easy as between threads, since processes work on different memory blocks
- Processes are expensive to spawn compared to threads
- Threads exist within a process, so communication between them is relatively easy compared to separate processes

THANKS FOR LISTENING -- QUESTIONS?
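The process-vs-thread point above can be sketched like this: multiprocessing.Pool sidesteps the GIL for CPU-bound work because each worker process runs its own interpreter with its own GIL. The worker function and pool size are illustrative, not from the talk.

```python
from multiprocessing import Pool

def cpu_bound(n):
    # A CPU-bound task: threads running this would serialize on the GIL,
    # while separate processes each hold their own GIL.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # Each task is dispatched to a worker process, so two CPU cores
    # can genuinely work in parallel.
    with Pool(processes=2) as pool:
        results = pool.map(cpu_bound, [10000, 20000])
```

The trade-off from the slides still applies: the processes here exchange data by pickling arguments and return values through the pool, which is costlier than threads sharing memory directly.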