
You have programmed in Python before. Regardless of your skill level, let us refresh some basics.

- Function: a block of organized, reusable code that completes a specific task.
- Module: a file containing a collection of functions, variables, and statements.
- Package: a structured directory containing a collection of modules and an `__init__.py` file, by which the directory is interpreted as a package.
- Library: a collection of related functionality; a reusable chunk of code that we can use by importing it into our program and calling its methods with the period (`.`) operator.

See, for example, how to build a Python library.

Python has an extensive standard library that offers a wide range of facilities, as indicated by the long table of contents of its online documentation.

The library contains built-in modules (written in C) that provide access to system functionality such as file I/O that would otherwise be inaccessible to Python programmers, as well as modules written in Python that provide standardized solutions for many problems that occur in everyday programming. Some of these modules are explicitly designed to encourage and enhance the portability of Python programs by abstracting away platform-specifics into platform-neutral APIs.
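As a small illustration of such platform-neutral APIs, the standard `os` and `pathlib` modules build file paths with the separator of whatever platform the code runs on (the file name below is made up for the example):

```
import os
import pathlib

# both build the same path, using '\\' on Windows and '/' on POSIX systems
p = os.path.join("data", "raw", "input.csv")
q = pathlib.Path("data") / "raw" / "input.csv"
print(p)
print(str(q))
```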

Question: How to get the constant \(e\) to an arbitrary precision?

By default, the constant is only represented in double precision.

```
import math
print("%0.20f" % math.e)
print("%0.80f" % math.e)
```

```
2.71828182845904509080
2.71828182845904509079559829842764884233474731445312500000000000000000000000000000
```

Now use the `decimal` package to compute \(e\) to an arbitrary precision.

```
import decimal  # arbitrary-precision decimal arithmetic
# set the required number of digits to 150
decimal.getcontext().prec = 150
decimal.Decimal(1).exp().to_eng_string()
decimal.Decimal(1).exp().to_eng_string()[2:]
```

`'71828182845904523536028747135266249775724709369995957496696762772407663035354759457138217852516642742746639193200305992181741359662904357290033429526'`
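Note that `decimal.getcontext().prec` sets the number of significant digits for all subsequent `decimal` arithmetic, not just for `exp()`. A minimal sketch:

```
import decimal

# with low precision, 1/7 is rounded to 5 significant digits
decimal.getcontext().prec = 5
print(decimal.Decimal(1) / decimal.Decimal(7))   # 0.14286

# raising the precision refines the result
decimal.getcontext().prec = 30
print(decimal.Decimal(1) / decimal.Decimal(7))
```

Since the precision is a property of the context, it can also be changed temporarily with `decimal.localcontext()` instead of modifying the global context.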

Beyond the standard library, the scientific Python ecosystem rests on a set of widely used packages:

- NumPy
- pandas
- matplotlib
- IPython/Jupyter
- SciPy
- scikit-learn
- statsmodels

Question: How to draw a random sample from a normal distribution and evaluate the density and distribution functions at these points?

```
from scipy.stats import norm

mu, sigma = 2, 4
mean, var, skew, kurt = norm.stats(mu, sigma, moments='mvsk')
print(mean, var, skew, kurt)

x = norm.rvs(loc=mu, scale=sigma, size=10)
x
```

`2.0 16.0 0.0 0.0`

```
array([ -4.44562667, 5.4660809 , 3.5125469 , -11.83950295,
2.90749401, 4.70256047, 4.90975057, 7.73920953,
3.50416655, -4.78896058])
```

The pdf and cdf can be evaluated:

`norm.pdf(x, loc=mu, scale=sigma)`

```
array([0.02722693, 0.06851781, 0.09285403, 0.00025086, 0.09720154,
0.07938247, 0.07654966, 0.03563019, 0.09292742, 0.02362275])
```
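The cdf is evaluated the same way with `norm.cdf`; for instance, at the mean of the distribution the cdf is exactly \(0.5\):

```
from scipy.stats import norm

mu, sigma = 2, 4
# cdf at the mean of a normal distribution is exactly 1/2
print(norm.cdf(mu, loc=mu, scale=sigma))
# cdf at the sampled points: norm.cdf(x, loc=mu, scale=sigma)
```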

Consider the Fibonacci Sequence \(1, 1, 2, 3, 5, 8, 13, 21, 34, \ldots\). The next number is found by adding up the two numbers before it. We are going to solve the problem in three ways.

The first is a recursive solution.

```
def fib_rs(n):
    if (n == 1 or n == 2):
        return 1
    else:
        return fib_rs(n - 1) + fib_rs(n - 2)

%timeit fib_rs(10)
```

`10.5 µs ± 331 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)`

The second uses dynamic programming memoization.

```
def fib_dm_helper(n, mem):
    if mem[n] is not None:
        return mem[n]
    elif (n == 1 or n == 2):
        result = 1
    else:
        result = fib_dm_helper(n - 1, mem) + fib_dm_helper(n - 2, mem)
    mem[n] = result
    return result

def fib_dm(n):
    mem = [None] * (n + 1)
    return fib_dm_helper(n, mem)

%timeit fib_dm(10)
```

`2.38 µs ± 26.5 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)`

The third is still dynamic programming but bottom-up.

```
def fib_dbu(n):
    mem = [None] * (n + 1)
    mem[1] = 1
    mem[2] = 1
    for i in range(3, n + 1):
        mem[i] = mem[i - 1] + mem[i - 2]
    return mem[n]

%timeit fib_dbu(500)
```

`63.9 µs ± 478 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)`

Apparently, the three solutions have very different performance for larger `n`.
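The gap can also be seen without the `%timeit` magic by timing the functions directly with `time.perf_counter`; a rough sketch (absolute timings are machine-dependent) comparing the exponential-time recursive version with the linear-time bottom-up one:

```
import time

def fib_rs(n):
    # plain recursion: exponential number of calls
    return 1 if n in (1, 2) else fib_rs(n - 1) + fib_rs(n - 2)

def fib_dbu(n):
    # bottom-up dynamic programming: linear number of steps
    mem = [None] * (n + 1)
    mem[1] = mem[2] = 1
    for i in range(3, n + 1):
        mem[i] = mem[i - 1] + mem[i - 2]
    return mem[n]

for f in (fib_rs, fib_dbu):
    t0 = time.perf_counter()
    f(25)
    print(f.__name__, time.perf_counter() - t0)
```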

Here is a function that performs the Monty Hall experiments.

```
import numpy as np

def montehall(ndoors, ntrials):
    doors = np.arange(1, ndoors + 1) / 10
    prize = np.random.choice(doors, size=ntrials)
    player = np.random.choice(doors, size=ntrials)
    host = np.array([np.random.choice([d for d in doors
                                       if d not in [player[x], prize[x]]])
                     for x in range(ntrials)])
    player2 = np.array([np.random.choice([d for d in doors
                                          if d not in [player[x], host[x]]])
                        for x in range(ntrials)])
    return {'noswitch': np.sum(prize == player), 'switch': np.sum(prize == player2)}
```

Test it out:

```
montehall(3, 1000)
montehall(4, 1000)
```

`{'noswitch': 236, 'switch': 388}`

The true value for the two strategies with \(n\) doors are, respectively, \(1 / n\) and \(\frac{n - 1}{n (n - 2)}\).
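As a sanity check, the simulated frequencies can be compared with these true values. The sketch below repeats the simulation for \(n = 3\) doors, where the targets are \(1/3\) and \(2/3\); the seed is an assumption added for reproducibility:

```
import numpy as np

np.random.seed(42)  # assumed seed, only for reproducibility

def montehall(ndoors, ntrials):
    doors = np.arange(1, ndoors + 1) / 10
    prize = np.random.choice(doors, size=ntrials)
    player = np.random.choice(doors, size=ntrials)
    host = np.array([np.random.choice([d for d in doors
                                       if d not in [player[x], prize[x]]])
                     for x in range(ntrials)])
    player2 = np.array([np.random.choice([d for d in doors
                                          if d not in [player[x], host[x]]])
                        for x in range(ntrials)])
    return {'noswitch': np.sum(prize == player), 'switch': np.sum(prize == player2)}

n, ntrials = 3, 10000
res = montehall(n, ntrials)
print(res['noswitch'] / ntrials, 1 / n)                  # empirical vs true, no switch
print(res['switch'] / ntrials, (n - 1) / (n * (n - 2)))  # empirical vs true, switch
```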

In Python, variables and the objects they point to actually live in two different places in the computer memory. Think of variables as pointers to the objects they’re associated with, rather than being those objects. This matters when multiple variables point to the same object.

```
x = [1, 2, 3]   # create a list; x points to the list
y = x           # y also points to the same list in memory
y.append(4)     # append to y
x               # x changed!
```

`[1, 2, 3, 4]`

Now check their addresses

```
print(id(x)) # address of x
print(id(y)) # address of y
```

```
4959635584
4959635584
```
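Identity can also be checked directly with the `is` operator, which compares these addresses rather than the values:

```
x = [1, 2, 3]
y = x           # same object as x
z = [1, 2, 3]   # equal contents, but a distinct object

print(x is y)   # True: same address
print(x is z)   # False: different addresses
print(x == z)   # True: equal contents
```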

Nonetheless, some data types in Python are “immutable”, meaning that their values cannot be changed in place. One such example is strings.

```
x = "abc"
y = x
y = "xyz"
x
```

`'abc'`

Now check their addresses

```
print(id(x)) # address of x
print(id(y)) # address of y
```

```
4554513544
4675105776
```

Question: What’s mutable and what’s immutable?

Anything that is a collection of other objects is mutable, except `tuples`.

Not all manipulations of mutable objects change the object in place; sometimes doing something to a mutable object gives you back a new object. Manipulations that change an existing object, rather than create a new one, are referred to as “in-place mutations” or just “mutations.” So:

**All** manipulations of immutable types create new objects. **Some** manipulations of mutable types create new objects.

The fact that different variables may all point to the same object is preserved through function calls (a behavior known as “pass by object-reference”). So if you pass a list to a function, and that function manipulates the list using an in-place mutation, the change will affect any variable that was pointing to that same object outside the function.

```
x = [1, 2, 3]
y = x

def append_42(input_list):
    input_list.append(42)
    return input_list

append_42(x)
```

`[1, 2, 3, 42]`

Note that \(42\) has been appended to both `x` and `y`.
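If the caller's list should be protected from such in-place mutations, a common idiom is to copy the list inside the function; `append_42_safe` below is a hypothetical variant illustrating this:

```
def append_42_safe(input_list):
    out = list(input_list)   # shallow copy: a new list object
    out.append(42)           # mutate only the copy
    return out

x = [1, 2, 3]
y = append_42_safe(x)
print(x)   # [1, 2, 3] -- unchanged
print(y)   # [1, 2, 3, 42]
```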

Numbers in a computer’s memory are represented in binary (bits that are on or off).

If not careful, it is easy to be bitten by integer overflow when using NumPy and pandas in Python.

```
import numpy as np

x = np.array(2**63 - 1, dtype='int')
x
# This should be the largest number numpy can display, with
# the default int64 type (64 bits)
```

`array(9223372036854775807)`

What if we increment it by 1?

```
y = np.array(x + 1, dtype='int')
y
# Because of the overflow, it becomes negative!
```

`array(-9223372036854775808)`
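The representable range of an integer dtype can be queried with `np.iinfo`, which helps anticipate such overflow:

```
import numpy as np

info = np.iinfo(np.int64)
print(info.min)   # -9223372036854775808
print(info.max)   # 9223372036854775807

# smaller integer types overflow much sooner
print(np.iinfo(np.int8).max)   # 127
```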

For vanilla Python integers, overflow is checked and more digits are allocated when needed, at the cost of speed.

`2**63 * 1000`

`9223372036854775808000`

This number is 1000 times larger than the previous one, but it is still displayed perfectly, without any overflow.

A standard double-precision floating-point number uses 64 bits: 1 for the sign, 11 for the exponent, and 52 for the fraction (significand); see https://en.wikipedia.org/wiki/Double-precision_floating-point_format. The bottom line is that, of course, not every real number is exactly representable.

`0.1 + 0.1 + 0.1 == 0.3`

`False`

`0.3 - 0.2 == 0.1`

`False`

What is really going on?

```
import decimal
decimal.Decimal(0.1)
```

`Decimal('0.1000000000000000055511151231257827021181583404541015625')`
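This is why floats should be compared with a tolerance rather than with `==`; the standard library's `math.isclose` does exactly that:

```
import math

print(0.1 + 0.1 + 0.1 == 0.3)               # False: exact comparison fails
print(math.isclose(0.1 + 0.1 + 0.1, 0.3))   # True: comparison within tolerance
print(math.isclose(0.3 - 0.2, 0.1))         # True
```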

Because the mantissa bits are limited, a float cannot be both very big and very precise. Most computers can represent all integers up to \(2^{53}\) exactly; after that, the representable floating-point numbers start skipping integers.

```
# a number larger than 2 to the 53rd
2.1**53 + 1 == 2.1**53
```

`True`

```
x = 2.1**53
for i in range(1000000):
    x = x + 1

x == 2.1**53
```

`True`

We added 1 to `x` a million times, but it still equals its initial value, `2.1**53`. The number is so large that adding 1 to it falls below the available floating-point precision, so each addition is lost to rounding.

Machine epsilon is the smallest positive floating-point number `x` such that `1 + x != 1`.

```
print(np.finfo(float).eps)
print(np.finfo(np.float32).eps)
```

```
2.220446049250313e-16
1.1920929e-07
```
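The definition can be verified directly: adding `eps` to 1 is detectable, while adding half of `eps` is rounded away (under the default round-to-nearest mode):

```
import numpy as np

eps = np.finfo(float).eps
print(1.0 + eps != 1.0)        # True: eps is detectable
print(1.0 + eps / 2 == 1.0)    # True: half of eps is rounded away
```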