Wawan Setiawan and his daughter Alexandra Kuznetsova


The perspective of an atheist who has read Marxist-Leninist philosophy on several matters, above all nature…
May 31, 2013, 5:02 am
Filed under: Uncategorized

The perspective of an atheist who has read Marxist-Leninist philosophy on several matters, above all nature:

The basic philosophy of quantum mechanics:

“I myself change, and you yourself also change;
it is not that my changing causes you to change, nor that your changing causes me to change.

I myself change uniquely (never the same twice),
and you yourself also change uniquely (never the same twice).

My unique changes are, of course, never the same as your changes, which are unique as well.

Yet because we interact continuously and uniquely (the interaction itself is never the same), I keep changing uniquely (never continuously the same) and you keep changing uniquely (never continuously the same).”

//////////////

Preface:

Marxism-Leninism-Materialism really divides into two parts: the domain of philosophy, of thought, of interpreting the universe and humanity; and the political movement.

I choose to stand in its philosophical education rather than in its political movement.

Or perhaps I would sooner found a Marxist-Leninist-Materialist school than start a Communist Party.

++++++++++

A. On philosophy
Philosophy is an interpretation, a way of human thinking (a Weltanschauung), especially about the universe.

Physics is, of course, part of that philosophy.

Consider Richard Feynman, who said [freely translated]:
“People say I am a physicist… no… I am not a physicist, I am merely someone who always wants to know the universe.”

So Feynman had already entered the domain of philosophy, and so had Albert Einstein, Werner Heisenberg, and Erwin Schrödinger; I have read some of their works, and to me they already belong to the domain of philosophy.

++++++++++

B. Why the philosophy of Marxist-Leninist dialectical materialism?

This philosophy is grounded in the reality of the universe and of our real everyday lives; it is far removed from mythology, so I will illustrate it with natural events that are easy to understand.

Here, I assert:

[important]

1. “Human beings will never be able to grasp the absolute reality of the universe; they can only make observations whose knowledge comes ever closer to accuracy.

But however accurate a human interpretation becomes, it is not the absolute reality of the universe.”

2. “Particles always undergo change; the primary law of the universe is that *the one thing that is constant and never changes is change itself*.

So over a long span of time many changes occur: nature changes, and human beings attempt new interpretations that come closer to the accuracy of the absolute reality of the universe [based on dialectical thinking].”

+++++++++++

C. Explaining the law of continuous change in one’s own identity

Marxists know the expression “A is never always equal to A.”

Let me try to explain it with everyday reality.

– Take 1 liter of water. The number 1 is mathematics; it does not describe the actual material of the water. So let us first generalize and say that there is 1 liter of water,

and then measure over time the water whose mathematical description is 1 liter.

a. In the very first second, the material “1 liter of water” has already changed; perhaps some evaporation occurs, so the material “1 liter of water” becomes the material “1 liter minus 0.00001 ml of water.”

For now let us call this internal change; later I will explain its cause, the reason things always change and are never continuously identical with themselves.

b. A human being, myself for example, from the moment of birth up to my present age of 38, has never had the same bodily material composition from one moment to the next.

I, Wawan… will change in material/molecular composition within the next second, and the second after that.

In principle I, Wawan, from birth until my present age of 38 years and some days, have never had the same composition and quantity of biological material/molecules, and I will keep changing continuously as I age.

c. Consider the brain.
The brain is a collection of matter; therefore the brain is also energy and waves.

In general, brain waves are divided into four frequency bands: beta, alpha, theta, and delta.

Their ranges are:
Beta: 13-30 Hz
Alpha: 8-13 Hz
Theta: 4-8 Hz
Delta: 0.5-4 Hz

So the question is:

I, Wawan… am 38 years old. From my birth until now, has my brain frequency ever been absolutely the same?

Answer:
– First explanation: if the measurement uses the generalization of the four brain-wave bands (beta, alpha, theta, and delta) with the hertz as the unit, then my brain waves have repeated within my lifetime.

But if I measure instead at a resolution of, say, 1/1,000,000 of a hertz,

then my brain waves will never have been the same from my birth up to my present age of 38 years and some days…

– Second explanation: the material composition of my biological body, from birth to 38 years and some days, has never been the same; consequently the waves, which are a manifestation of those particles/molecules, can never have been the same either, from my birth to my age of 38 years and some days.
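The resolution argument can be demonstrated in a few lines of Python; the two alpha-band readings and the comparison helper below are invented purely for illustration:

    # Whether two brain-wave readings count as "the same" depends
    # entirely on the measurement resolution. Values are made up.
    f1 = 10.0000003  # an alpha-band reading at one moment, in Hz
    f2 = 10.0000007  # an alpha-band reading at another moment, in Hz

    def same_at(resolution_hz, a, b):
        # Two readings are "equal" once rounded to the given resolution.
        return round(a / resolution_hz) == round(b / resolution_hz)

    print(same_at(1.0, f1, f2))   # True: both register as 10 Hz alpha waves
    print(same_at(1e-6, f1, f2))  # False: they differ at the micro-hertz scale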

++++++++++

D. On constants

Yesterday, in a discussion, I asserted that the gravitational constant, written G (capital g), is relative.

Measured on the Moon and on Earth it will differ, though the scale of the difference, the deviation, is tiny, microscopic.

But it would differ greatly if the measurement methodology were applied at a black hole, or on a planet whose gravity is 1,000,000 times greater than Earth’s.

Explanation:
A constant is a number, mathematics. It is obtained not mystically but from the interaction of the matter/particles in its surroundings (quantum mechanics: all mass is interaction).

The particles/matter on Earth differ from those on Jupiter, at a black hole, or on a planet with 1,000,000 times Earth’s gravity.

[important]
“So the result obtained from any procedure for computing a constant is likewise relative, and it will keep changing depending on the composition, state, and motion of the surrounding material, including gravitational effects.”

++++++++++++++++

E. The dialectical law: quantity transforms into quality

On Earth we know many laws of physics, Archimedes’ principle for example.

Every law of physics we know on Earth is bound up with Earth’s gravity; call that gravity X.

If there were a planet Y with a gravitational pull 1 million times Earth’s, meaning quantitatively 1 million times greater, that alone could change the laws of physics (a qualitative change).

A black hole is an example: photons are trapped by the strength of its gravity, so its light radiation becomes extremely minimal. I am not saying 0, or zero, but as minimal as possible, say 0.00000000000000000001.

At a black hole, many laws of physics that hold on Earth will fail to hold, because of the strength of its gravity.

F. The cause of continuous internal change

Every particle is always changing in its mass, energy, and wave.
Say particle a currently has a mass expressed by the number 10.

One second later, particle a will have changed and will have a mass of, say, 9.9999999999999.

The second after that, particle a will have changed again and will have a mass of, say, 10.0000000000001.

[note]
Of course, a change in mass entails a change in energy and wave.

Particle a, like every particle in the universe, undergoes continuous change that is never the same, always different, though on an extremely microscopic scale.

[important]
***
Every change in particle a and every change in particle b are likewise never the same, because every particle is unique; at the microscopic scale nothing is identical, including all of their everlasting changes.
***

Just as with me earlier: I have never been the same, from birth to my age of 38 years and some days.

Yet particles are everlasting, and in that permanence they never have the same mass, energy, and wave; they are constantly changing.

****
“The reason particles constantly change in mass, energy, and wave is internal and external factors that interact continuously and can be neither halted nor separated, particle from particle.”

“We cannot attribute a particle’s change to internal or external factors alone, because all particles interact, transform one another, and can never be absolutely separated from one another.”
***

G. Closing

I think many of my writings contain wrong words or sentences that can lead to faulty logic.

For example, yesterday I wrote that a digital gauge reading of 1 liter of gasoline will never equal the actual gasoline material; it can be more “and” less.

That logic of “and” is wrong; the correct logic is “or.”

Perhaps someone would like to help me write a book, tidying up my writing on my views and interpretation of the universe?

May 31 2013

[Ws]

on… the perspective of dialectical materialism, physics, and quantum mechanics on the universe



MIT Computer Science (optimization problems)
January 9, 2013, 1:01 am
Filed under: Uncategorized
PROFESSOR: In the modern world, we often have to search through many alternatives to find out which combination will give us the best result. For example, what stock portfolio will give us the greatest return at acceptable risk? Or what combination of trams and buses will get me across town in time for the concert tonight? Or we might want to know what choice of rocket booster and trajectory will provide the minimum transit time for a space capsule to Mars.

These problems, called optimization problems, have been studied for many years, and good computational solutions exist for many of them. If you have your own optimization problem, you’ll probably find it can be reduced to one of the classic optimization problems, and you’ll be able to adopt one of the existing solutions. Over the next few lectures, we’re going to be looking at some of the classic optimizations. And of course, you’ll get to experiment with some of the solutions in Python. As we tackle each of the problems, spend some time thinking about how you would approach the problem. It can be a lot of fun. Let’s get started.
Optimization problems share some common characteristics. Suppose we wanted to find the minimum airfare from Boston to San Francisco on a Monday or Tuesday. Well, the first thing we notice is that the statement of the problem has an implicit objective function. So there’s our objective function: the minimum airfare. Often the objective function involves mins or maxes, biggest or smallest, or something like that. Notice that some objective functions actually require us to look at all possible solutions so that we can choose the one that is the minimum or the maximum. After all, how would we know that we had found the minimum unless we examined all the other ones to see that they were bigger?

The second thing that we notice is a set of constraints that the solution must satisfy. In this case, we’re interested in flights that happen on Monday or Tuesday. That seems pretty straightforward. We could obviously generate solutions and check to see if they satisfy the constraints. And then if they do, we can apply the objective function and decide if this is the one that we are looking for. Sometimes a set of constraints can help constrain the search; that’s why they’re called constraints. For example, with the airfare, we might only consider flights that were flying on Monday or Tuesday, and from the set of all possible flights, that would be a considerable reduction in the amount of work we would have to do to come up with a final answer.

Before we look into how to write computer programs to solve optimization problems, let’s take a moment and look at some of the more interesting ones.
Here’s a classic optimization problem from the world of chess. The goal is to place, in this case, 8 queens on an 8-by-8 board such that none of the queens is attacking another. That would mean that no 2 queens occupy the same row, column, or diagonal. Well, if we’re going to program a computer to solve this problem, we might try a simple brute-force approach where we would start by placing the first queen on a square and then think about where we could place the second queen. We wouldn’t be able to place it anywhere in the first column, nor in the first row, second column, or in the second row, second column. But we could place the second queen here. And then we could go on and think about how to place the third queen and the fourth queen, and so forth. We might eventually come to a situation where it would be impossible to place some queen, because there would be no remaining position on the board where that queen could go. And that might mean that we’d have to back up and reconsider one of our earlier choices. For example, the final solution might involve having the second queen actually be in this square, and then we’d have to redo the placement of all the subsequent queens. So this is an example where we’re step by step slowly enumerating all possible solutions.

Well, in this case, there are 64 squares on an 8-by-8 board, and we’re trying to choose 8 of the locations. So 64 choose 8 is about 4 billion possibilities, so the enumeration would take a long time. However, we will eventually come to an answer, and one of the characteristics of optimization problems is that there’s almost always a brute-force approach that involves an exhaustive enumeration of all the possibilities.
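A minimal Python sketch of the backtracking enumeration just described; the one-queen-per-row representation is my own choice, and the count of 92 solutions for 8 queens is the standard result:

    def queens(n=8, placed=()):
        """Yield complete placements; placed[i] is the column of the queen in row i."""
        row = len(placed)
        if row == n:
            yield placed
            return
        for col in range(n):
            # A square is safe if no earlier queen shares its column or a diagonal.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(placed)):
                # Place the queen and recurse; returning from the recursion is
                # the "back up and reconsider an earlier choice" step.
                yield from queens(n, placed + (col,))

    print(next(queens()))            # one valid placement
    print(sum(1 for _ in queens()))  # 92 distinct solutions for n = 8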
Another classic problem is bin packing. In this problem, we’re given many objects of different sizes, and our goal is to pack them into bins of a fixed capacity. In this case, the capacity of each bin is 80. So we can see here that if we stack up objects of size 26, and 8, and 45, we’ve basically almost filled up the first bin. The goal of the problem is to pack all the objects into bins and, if we can, to minimize the number of bins. You can imagine there are many different ways to think about how to go about this problem. For example, you could simply take the objects as they came and stack them up one at a time in a bin, and when the next object wouldn’t fit, you simply move to the next bin and keep going until you’ve used up all the objects. Of course, that would leave a bunch of bins with room at the top, and perhaps some small object would fit there. So another alternative would be to leave bins open all the time, and every time you have an object, you look for the first bin that it will fit in. Or maybe you look for the bin that has the most room and put it in there. Or maybe you look for the bin that has the least amount of room remaining but in which the object would still fit. And so forth. It’s actually fascinating to think through all the different ways you could tackle this problem.

You can see that if we tried to enumerate all the possible solutions, however, particularly when there are many, many choices to be made, it would be almost computationally impossible. It would take years of computer time to figure out all the possible combinations. So this is an example of a problem where almost certainly we’ll have to figure out a heuristic or approximate solution that will let us make some progress, and hope that it produces, as best we can, the smallest number of bins.
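A short Python sketch of the “first bin it fits in” idea (the first-fit heuristic); the capacity of 80 and the 26, 8, 45 example follow the lecture, while the remaining object sizes are made up:

    def first_fit(objects, capacity=80):
        """Place each object into the first open bin with room, else a new bin."""
        bins = []  # each bin is a list of the object sizes packed into it
        for size in objects:
            for b in bins:
                if sum(b) + size <= capacity:
                    b.append(size)
                    break
            else:
                bins.append([size])  # no open bin fits, so start a new one
        return bins

    # 26 + 8 + 45 = 79 nearly fills the first bin, as in the lecture's figure.
    print(first_fit([26, 8, 45, 60, 30, 20, 75, 10]))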
For our next example, suppose you worked at a cabinet shop and your job was to cut doors and cabinet sides out of large sheets of plywood. The objective function is to minimize the waste: you’d like to make sure that any lumber that ends up getting thrown away is the least amount of lumber possible. The constraint that you have to satisfy is that you can only make guillotine cuts. In a cabinet shop, you have to use a table saw to make a cut, and table saws really want to cut all the way across a piece of stock, like so. That would be a guillotine cut, the first one you would make. So the goal is to figure out a plan for laying out the pattern and then the sequence of cuts to minimize the waste while only making guillotine cuts. What’s interesting about this particular problem is that once we’ve made our first cut, we have a similar subproblem. In other words, we now have a smaller problem, exactly the same, of taking these 3 pieces and laying them out on a smaller piece of plywood and solving the problem once again. This is a structure you often find in optimization problems, where you have to think a little bit about how solving the problem really involves solving 2 subproblems and then looking to see if the solution produces a minimum cost.
Here’s an example with a different kind of cutting. Imagine that this is a description of an electrical grid: we have a generating station at this end, a city down here at this end, and the intermediate points are all substations. The arrows represent transmission lines and the numbers represent their capacity. What we’re interested in doing is computing the maximum flow from the starting point down to the ending point: how much current can flow through this network? Well, the answer to this problem is related to the following observation. Suppose we make a cut through the network; a cut divides the network in half so that s is in one half and t is in the other half. And we realize that all the current flowing from s to t would have to cross that cut. What we would like to do is find the so-called min cut, the minimum cut: if we look at the capacity of all the cut edges and sum up that capacity, we’d like to find the cut that minimizes that sum. If we find the min cut through the network, that will actually tell us the maximum flow that can go from s to t. As you can see, there are a lot of possible ways to make cuts in this network, and to find the minimum without examining every possible cut is sort of a tricky business.
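To get a feel for why examining every cut is a tricky business, here is a brute-force Python sketch that enumerates all s-t cuts of a tiny invented network; real max-flow algorithms avoid this exhaustive enumeration:

    from itertools import combinations

    # A made-up directed network: edge (u, v) -> capacity.
    capacity = {('s', 'a'): 10, ('s', 'b'): 5,
                ('a', 'b'): 4, ('a', 't'): 8, ('b', 't'): 10}
    inner = {'a', 'b'}  # every node except the source s and the sink t

    best = None
    # Enumerate every partition with s on one side and t on the other.
    for r in range(len(inner) + 1):
        for group in combinations(sorted(inner), r):
            s_side = {'s'} | set(group)
            # The cut value is the total capacity crossing from s's side to t's.
            cut = sum(c for (u, v), c in capacity.items()
                      if u in s_side and v not in s_side)
            if best is None or cut < best[0]:
                best = (cut, sorted(s_side))

    # By max-flow/min-cut, the min cut value equals the max flow (15 here).
    print(best)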
As a final example, consider the problem a traveling salesman faces. He has a territory with cities and rail lines connecting the cities. He would like to find the minimum-cost trip subject to the following constraints: he’d like to visit each city exactly once, and at the end of the journey, he would like to return home. This is one of the classic graph problems. The vertices represent cities, the lines represent the opportunities to travel from one city to another, and the goal is to find a path through this graph of cities that visits each city exactly once and returns to the start. Now the question is: is that the minimum-cost such trip? Can we figure out a way to do this that doesn’t involve enumerating every possible trip?
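A brute-force Python sketch of that enumeration, fixing the home city and trying every ordering of the rest; the four cities and their distances are invented, and this approach is only feasible for tiny inputs:

    from itertools import permutations

    # Made-up symmetric distances between four cities; 'A' is home.
    dist = {('A', 'B'): 4, ('A', 'C'): 2, ('A', 'D'): 7,
            ('B', 'C'): 3, ('B', 'D'): 5, ('C', 'D'): 6}

    def d(u, v):
        return dist[(u, v)] if (u, v) in dist else dist[(v, u)]

    def tour_cost(order):
        # Leave home, visit each city once in this order, then return home.
        stops = ('A',) + order + ('A',)
        return sum(d(a, b) for a, b in zip(stops, stops[1:]))

    best = min(permutations(['B', 'C', 'D']), key=tour_cost)
    print(best, tour_cost(best))  # the minimum-cost round trip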
The challenge of optimization problems is that they are hard to solve. Hard in the sense that finding the solution you’re looking for, the optimal solution, requires examining all the possible combinations of items. That’s why this field is often given the name combinatorial optimization: we study all the different combinations, because we’re going to look through each of the possible combinations and see if it’s the one that is the best solution to the problem that we have. The reason it’s hard is that the time to examine all the combinations of a set of items grows exponentially with the number of items. What I mean by that is, if I go ahead and add a single item to the collection and think about all the possible combinations of items, what I will discover is that I will have doubled the number of combinations. So as I go from 1 to 2 to 4, the problem is getting exponentially harder, twice as hard each time. And the reason that’s a problem is that real-world challenges often have a very large number of items that you need to consider. As the problem grows exponentially more difficult with each item, we can see that if we have 50 items, the problem may, in fact, take us too long to solve using the exhaustive technique of looking at each possible combination. So instead, we’re going to have to look around for approximate solutions to these problems. We give that the fancy computer science term of heuristics: something that will let us not examine all the solutions but come up with a solution more quickly.
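That doubling is easy to check directly; a small Python count of the subsets of a growing collection:

    from itertools import chain, combinations

    def all_subsets(items):
        return list(chain.from_iterable(
            combinations(items, k) for k in range(len(items) + 1)))

    # Each added item doubles the count: 2 ** n subsets of n items.
    for n in range(1, 6):
        print(n, len(all_subsets(range(n))))  # prints 2, 4, 8, 16, 32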


MIT Computer Science (Robot)
January 9, 2013, 12:54 am
Filed under: Uncategorized
RUSS TEDRAKE: Hi there. I’m Russ Tedrake. I’m a professor of Electrical Engineering and Computer Science here at MIT. I want to talk today about how you program robots; maybe cool robots, like robotic birds, right? Let me show you why I care. So I like building software. I’ve always liked building software. But now I like to build software that comes alive. What do I mean by coming alive? I’ve been building these robotic birds with my research group over the last few years. This is our two-meter-wingspan autonomous ornithopter. It flies around the MIT campus. It’s got a Linux box in its belly and a huge lithium polymer battery in its nose. And it had a beak, but we broke that off in one of our crashes. But now we’ve got this bird flying around, being driven by relatively sophisticated programs running in its belly.

That’s one example. We’ve been working on a bunch of robotic birds. This is a project where we tried to make an airplane that could land on a perch like a bird. And in fact, although that’s a very hard problem, because we have to stall our wings, we’ve now got airplanes that can fly through post-stall maneuvers and land on a perch just like a bird. Our newest project: we’re trying to build a robotic ostrich that can actually run bipedally at 50 miles an hour. So this is a simulation of a robotic ostrich that we’ve been building in collaboration with Jerry Pratt in Florida. And the reason I think we can do this is that we’ve got this fantastic mechanical design in the leg that runs almost without any software, and then our software just has to do a little bit of work to make it run really like an ostrich. And we’re on to building humanoids next.
So let me talk about what it would take, using what you’ve learned in this course, to build software for something as complicated as a robotic bird or a robotic ostrich. What do you have to do to program a robotic ostrich? Well, first of all, you really do have to understand physics. It’s not a computer-science-only enterprise. You have to understand the way physics interacts with your software. You definitely need incredible mechanical design to build advanced robots like this. But really, at the heart of it, I think the reason you haven’t seen a lot of robots that move with the grace and efficiency of animals is that we haven’t figured out the software yet. It’s really a software problem.

So what’s different about programming a robotic ostrich versus programming something you’ve done just on the computer? Well, robots are fantastic in that they have to interact with the world, and they have the ability to affect the world. The problem is, when you’re writing software that interacts with the real world, everything gets harder. First of all, you really need to make decisions all the time. Your computer programs use noisy sensor data coming off real sensors measuring real things in the world, so suddenly you have to deal with a lot of the statistics, like you’ve learned, of the sensor data. You have to do all your computations under very hard real-time constraints: if my software loop takes too long to think about where I’ve got to put my foot next, then the robot falls down. I’ve got serious hard real-time constraints. And I’ve got my software running in a feedback loop, which means there’s a very delicate interaction between the decisions I make and the physical robot. And these things, running back and forth, can quickly get out of control, or do exactly what you want them to do. So unlike simple programs that run just on your computer, no program on a robot ever runs the same way twice. Every single time you run the robot, it’s in a different situation, a different set of conditions, and the program runs completely differently. So you have to really figure out how to build robust software.
So let’s formulate a problem for building software for a robotic ostrich. For our robotic ostrich model here, we formulate the problem of trying to find the fastest perfectly periodic, repetitive gait, which is a solution to the differential equations of the robot. But it’s got some constraints on it. We don’t want just any periodic solution. We need one that’s stable, meaning it won’t fall down if I start it in a slightly different initial condition; it’s still going to run with this gait that we programmed. And it has to be robust, not just to changes in initial conditions, but if there’s some screw that’s fallen loose in the foot, or if the terrain is not quite as flat as we modelled originally, then we really need it to be robust like this. So this is actually an optimization problem, just like you’ve been learning about in class. It’s a constrained optimization where stability and robustness are given as constraints, and the objective is speed. I want to go as fast as possible given what my motors and my physics can allow.
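To make the shape of that formulation concrete, here is a toy constrained optimization in Python using scipy. The two-variable "gait" model, the stability limit, and the bounds are all invented for illustration; the lab's real problem involves the robot's differential equations, not this stand-in:

    from scipy.optimize import minimize

    def negative_speed(x):
        stride, freq = x
        return -(stride * freq)  # speed = stride length * step frequency

    # Pretend "stability" demands stride * freq**2 <= 40 (a made-up limit);
    # an inequality constraint is satisfied when its function is >= 0.
    constraints = [{'type': 'ineq', 'fun': lambda x: 40.0 - x[0] * x[1] ** 2}]
    bounds = [(0.1, 2.0),   # stride length the legs allow, in meters
              (0.5, 5.0)]   # step frequency the motors allow, in Hz

    result = minimize(negative_speed, x0=[1.0, 1.0],
                      bounds=bounds, constraints=constraints)
    print(result.x, -result.fun)  # best (stride, freq) and the speed achieved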
Unfortunately, it’s a very hard optimization problem. So we have to use more tools that you’ve learned in class in order to find even any periodic solution to these kinds of problems. We often start with search algorithms. And in fact, we even use something we talked about in class: the idea of randomness, even to solve non-random problems.
Let me give you a toy example. Imagine I have a silly robot, a wheeled robot, that’s stuck in this room. These are the walls of the room. And we’re trying to drive it to this goal location outside of the room. My robot has to somehow figure out the geometry of the room, that it has to go through this little narrow corridor in order to get over to the goal. Now, that seems potentially hard. But in fact, you can solve it very nicely with randomized algorithms, which build these fractal-looking trees to quickly explore the space. And eventually, with probability one, even for very, very hard geometric problems, they will find a solution that will eventually get me to the goal and find a path to the target. You can see it’s not a very good path; it’s jumpy and everything, so we’re going to use optimization in a second to clean it up.
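Here is a minimal sketch, in Python, of that randomized tree-growing idea (in the style of an RRT). The coordinates, step size, and iteration cap are arbitrary, and a real planner would also reject steps that collide with the walls:

    import math
    import random

    start, goal = (0.0, 0.0), (9.0, 9.0)
    tree = {start: None}  # maps each node to its parent in the tree

    for _ in range(5000):
        # Sample a random point, find the nearest tree node, step toward it.
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        near = min(tree, key=lambda q: math.dist(sample, q))
        angle = math.atan2(sample[1] - near[1], sample[0] - near[0])
        new = (near[0] + 0.5 * math.cos(angle), near[1] + 0.5 * math.sin(angle))
        tree[new] = near  # a real planner would first check for collisions here
        if math.dist(new, goal) < 0.5:
            path = [new]  # reached the goal: walk parents back to the start
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            print(f"found a (jumpy) path with {len(path)} nodes")
            break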
But what’s surprising is that, even though it can work for these very simple robots, it can also work for much more complicated robots, where the constraints are not just walls in the environment but the dynamics of the world. So here’s the same algorithm now, running on a robotic dog, with very strict limitations on what its motors can provide, and difficult geometric constraints in trying to overcome this rough terrain. And the same sort of search algorithm can find solutions where this dog can now bound. Maybe not as well as your dog at home, but it can bound over this rough terrain just using these randomized search algorithms. But like I said, they’re not perfectly optimized solutions. So after that, we put optimization back in. And we say, we’re not going to find just any solution; we’d like to find the best solution, best in some metric like speed or efficiency. And then we get these nice, smooth gaits of our robotic ostrich running across flat terrain. And we could make the robotic ostrich run on intermittent terrain, over steps, things like that, too, using the same tools.
OK, so you can apply this sort of thinking to a lot of different cool robots. We’ve actually got another project where we have a collaborator with pigeons that fly through obstacle courses. We put cameras on the pigeons’ heads and motion-capture markers on their backs, and we’re learning about the strategies that birds take when they’re navigating very cluttered environments. We take that information and formulate new optimization problems. And we’ve built our plane, which is supposed to do just as well as the bird does flying through a forest at very high speed, using cameras on board the plane to do the navigation. This is a seriously hard problem. It turns out that using the same prescriptions of optimization and search, we can design these pretty fun aircraft that shoot through our synthetic forest here, with PVC poles as the trees, at very high speeds (this is slowed down eight times, as you can see; now sixteen times), doing extremely accurate maneuvers to do a knife edge through this particular scenario. Here’s another set of trees they can fly through. And if you program it right, with stability and robustness in mind, then it’s extremely repeatable.

So in summary, I wanted to tell you today that robotics research in academia, in industry, and in practice uses a lot of the sophisticated algorithms you learned about in this course: some of the ideas of search, of optimization, of statistical thinking, and of making decisions based on noisy measurement data. In particular, there’s a surprisingly strong influence of this idea of using randomized algorithms to solve non-random problems; even for a deterministic robot in a known world, oftentimes our best way to solve the problem is with randomized algorithms. These are all contributing in a big way to robotics research today. And in fact, there are lots of algorithms yet to be invented that are going to make the next wave of robots move just as gracefully as an ostrich or a bird or a humanoid. So I hope you come and join us in the quest for building better robots, and use your software skills to help. Thanks.


MIT Computer Science
January 9, 2013, 12:52 am
Filed under: Uncategorized
COLLIN M. STULTZ: So my name is Collin Stultz. I’m a faculty member at MIT in the Department of Electrical Engineering and Computer Science, and also in the Division of Health Sciences and Technology. I am also a clinical cardiologist who still sees patients. And I have an interest in combining computational methods with medicine to make statements about patients, statements that will help to improve the lives and quality of life of people who are sick. Today, I’m going to talk about some methods that we’ve been working on that combine sophisticated computational algorithms to make statements about patients with cardiovascular disease.

Cardiovascular disease is a big problem. In fact, everybody has atherosclerosis. It doesn’t matter how young or how old you are; atherosclerosis is actually a disease that begins in the juvenile. And the problem is that, although everyone has it, some people will die from it and some people won’t. So a huge challenge in clinical cardiology is to identify those patients who have atherosclerotic heart disease who are going to die from it. This slide demonstrates that atherosclerosis is a huge problem. About every 30 seconds in the United States, somebody dies of a heart attack, and about half a million people die each year; that’s about a quarter of all deaths in the US. And these are statistics from just a few years ago. Although we know that everybody has atherosclerosis, and we know a lot about the disease, we’re terrible at identifying those who will die. If you look at the whole pie of people who die from atherosclerotic heart disease, we can identify only a small fraction of those who will. So our goal, or at least one of the goals of my research group, is to develop predictive models that improve our ability to identify patients at high risk of death.
So when you think about atherosclerosis, what really happens when somebody has a heart attack? Well, here in this picture is the classic photo of somebody having chest pain due to heart disease, clutching his chest. It usually feels like a pressure that comes down on your chest; it’s typically described as an elephant sitting on your chest. And this type of event is correlated with something that has been very well described in the literature. A lot of studies have been done to determine what causes this type of pain that leads to adverse events. If you were able to open this man’s chest to decipher the actual things that are going on, you’d find that here’s the heart, and this is a blood vessel which supplies blood to a region of the heart. In a heart attack, this blood vessel becomes blocked. You have this thing which builds up within the lumen of the vessel, called an atherosclerotic plaque. This heart muscle normally was being perfused by blood, but this blockage happens, and it grows all of a sudden, and the blood vessel can no longer supply blood to that area of the heart. Consequently, that heart muscle dies.

If you were to take a blow-up of this picture: this is an actual specimen taken from somebody who died of a massive heart attack. Here is the atherosclerotic plaque. And what happens is that this plaque ruptures, and the blood, which sits in the lumen, comes into contact with all of the bad material in the atherosclerotic plaque, and a clot forms. This process of breaking open the plaque involves lots of inflammatory cells, cells which are involved in an immune response. These cells secrete substances which break open the plaque. There’s a protective covering of collagen on top of the atherosclerotic plaque that becomes ruptured because these cells secrete enzymes that degrade the collagen. And that’s what allows the blood to come into contact with all of the bad material within the atherosclerotic plaque, which leads to the formation of this clot.
So here’s where computation can come into play. All of this was determined by biological methods, experiments that are typically done in a laboratory with test tubes and gels and so forth. But the events that happen at this level, such as the enzyme that actually breaks down collagen, involve very, very small things that are typically very hard to study experimentally. What you can do with a computer, though, is model these proteins. We know what the structure of collagen looks like. We know the structure of the enzymes that degrade collagen. And we can put this molecule in a computer and do sophisticated calculations to get estimates of how this molecule moves and how it gets degraded. We can get insights into the motions of this protein that lead to its degradation, which leads to rupture of the plaque, which leads to formation of the clot, which leads to a heart attack. Our group has been involved in doing a lot of sophisticated computations along these lines. But that’s really not what I’m going to talk about today. I’m going to talk about something that’s a lot bigger. This is just an example to show you one avenue in which sophisticated computational tools can be used to gain insights into the disease process known as atherosclerosis and atherosclerotic plaque rupture.
So now, what I’m going to talk about: if you go back to this mature gentleman who’s suffering from chest pain, this person, let’s say, goes into an emergency room or goes to his doctor. And the physician gets a lot of information about this patient. They can get electrocardiographic information; that’s what this is. If you’re familiar with shows like ER or Hospital, people come into the emergency room, leads are put on the patient’s chest, and what’s recorded is an electrocardiogram. These are the same tracings that you may be familiar with from watching popular television shows. You can also get tracings that record blood pressure, how the blood pressure changes over time, and different tracings that record pressures within various parts of the vascular tree.

Now, this electrocardiogram: maybe a lot of you have had these done before or know somebody who’s had them done before. It’s very easy to obtain. And the question that we’re posing is, can we use data like these, which are easy to obtain and are obtained on a lot of people, to try to identify whether this person is going to die or not? He may be having a heart attack, but he can survive his heart attack and live the rest of his life in a productive manner. Or he could have a heart attack and die. And we want to identify those patients who have a high risk of death after a heart attack. So as I’ve said, lots of information can be recorded for my gentleman when he’s well and when he’s sick. And this data is cheap and easy to obtain, and we want to see how we can use it to improve our ability to identify patients who die. So again, we’re interested in using easily obtained, low-cost data to improve our ability to identify patients at high risk of what we call adverse cardiovascular events, of which the worst is death.
Now, one thing to note about the surface electrocardiogram: it encodes a lot of information. This is a normal ECG strip, ECG being electrocardiogram. What I’m circling here is one beat; it constitutes one beat of the heart. And what you notice is that there are three peaks associated with the electrocardiogram: what we call the P wave, a QRS complex, and a T wave. And it repeats in a quasi-periodic fashion, pretty regular. Now, a healthy electrocardiogram tells you a lot about the patient. It tells you that the patient’s myocardium, the heart muscle, is for the most part normal. The electrocardiogram records electrochemical impulses as they traverse the heart, and you need a normal myocardium, normal heart muscle, to have a normal electrocardiogram.

The heart, however, is not by itself; it’s not an island. It interacts with other things. There’s the nervous system, which sends nerves to the heart and helps to innervate it, so the heart responds to nervous stimulation. The classic example is the fight-or-flight response: if you’re running away from something that you’re scared of, or if you become nervous at times, your heart starts to beat faster. That’s because your nervous system is telling your heart to do so. So a normal electrocardiogram tells you about the heart rate, tells you about the interaction of the nervous system with the heart, and tells you about the myocardium. We call the myocardium an endogenous factor, because it’s intrinsic to the heart itself, and the interactions with the nervous system exogenous factors. The nervous system is a combination of two types of systems, not too important for this discussion, called the sympathetic and the parasympathetic systems. So again, when a cardiologist gets an electrocardiogram, he or she looks at it in a gross fashion to assess the health of the myocardium, the endogenous factors. And you can look at the heart rate from the electrocardiogram as well, and at variations in the heart rate, to determine the health of the exogenous factors, how they interact with the heart and how they affect it.
Now, there are a number of automated approaches that try to analyze an electrocardiogram to determine who’s at risk of bad events and who’s not. But a lot of those automated methods focus only on changes in the heart rate; they really look only at the exogenous factors. So our hypothesis is that we can use computers to evaluate the electrocardiogram, looking at both endogenous and exogenous factors, to get an overall picture of the health of the patient, and thereby improve risk stratification for cardiovascular death, CVD for short.

Now, a cardiologist, even the most well-trained cardiologist, looks at an electrocardiogram, but it’s very hard for him or her to detect very subtle differences between beats, very subtle abnormalities. We’re very good at detecting big things, like large changes in the QRS complex, large changes in the T wave, or in the other parts of the beat. But a very small change that might be there is very easy to miss with the naked eye. And that’s the sort of thing that computers do exceptionally well. Computers can look at beats and quantify the differences between them, even when the differences are quite minuscule and hard to detect by the naked eye, even by a specialist. So we want to develop metrics to quantify subtle variability in the morphology of the signals, in how beats change over time. And the hypothesis is that beats that change a lot over time indicate an unstable system and are associated with patients who are at high risk of cardiovascular death. This work, parenthetically, was pioneered by myself, John Guttag, a colleague of mine in the Electrical Engineering and Computer Science Department, and Professor Zeeshan Syed at the University of Michigan.
So you take a normal electrocardiogram (remember, the goal now is to detect subtle variations in the electrocardiogram over time). We pre-process it to remove bad beats: beats that are really abnormal, that maybe are caused by noise or other sorts of things that can cloud our ability to make rigorous statements about the patient. Then, for every pair of beats within the ECG, we align them and compute what we call the morphology differences between consecutive heartbeats. And from that, we get what’s called a Morphology Difference Time Series: for every pair of beats, we compute the differences, and we get a time series. Then we summarize the variability in this time series to get what we call the morphologic variability measure. Now, the key aspect of this whole approach is this: it is a rigorous method to compute differences in the morphology between consecutive beats.
So how do you do that? Well, this is one beat. Remember, from one of the previous slides, a beat is composed of several different peaks: there’s a P wave, a QRS complex, and a T wave. So let’s say these are two beats that occur in the patient. The beats can vary in very subtle ways as the ECG strip is being recorded, as the data are being obtained. If you want to take just two beats and line them up with one another, you can look at all of the samples associated with this signal and all of the samples associated with that signal, line them up, and just subtract them to get a difference between those beats. But there’s a problem with that. If the beats vary a lot in length, then you may end up pairing up this portion of the T wave with a region of the other beat that doesn’t correspond to the same physiologic event. The T wave corresponds to something very specific, as does the QRS and as does the P wave. So really, when you align beats and compare them, you want to make sure that you’re comparing the same physiologic things. So there is an algorithm that we use, called dynamic time warping, to align these beats and ensure that they correspond to the same physiologic events. Similar algorithms are used in the area of computational biology to align sequences: sequences of amino acids, sequences of nucleic acids, DNA, protein. Essentially, you just want to make sure that you’re comparing apples with apples: a T wave with a T wave, a QRS with a QRS, and a P wave with a P wave. And then, once this pairing is done using the dynamic time warping algorithm, we can subtract the differences and get a meaningful result.
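A minimal Python sketch of the dynamic time warping computation itself, using the classic dynamic-programming recurrence; the two "beats" below are toy sequences, not real ECG samples:

    def dtw_distance(a, b):
        """Cost of the best alignment of a and b (smaller = more similar)."""
        n, m = len(a), len(b)
        INF = float('inf')
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                # Extend the cheapest of: match both, stretch a, or stretch b.
                D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
        return D[n][m]

    beat1 = [0, 1, 5, 2, 0]     # a toy QRS-like shape
    beat2 = [0, 1, 1, 5, 2, 0]  # the same shape, slightly stretched in time
    print(dtw_distance(beat1, beat2))  # 0.0: warping pairs like with like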
So quickly, to summarize: we have our electrocardiographic strip. We take two beats, line them up, and perform dynamic time warping, akin to the sequence alignment done in computational biology, and we get a number. We look at the next pair, do the same sort of comparison, line them up, and get another number. Similarly, we do this for the next pair of beats and get another number. And we get this Morphologic Difference Time Series: time on the x-axis, and on the y-axis the values that correspond to these differences. Lots of variation in this signal means there are lots of changes in the heartbeat over time. Even though these changes might be subtle, they’ll be captured, because computers are very good at computing these differences and doing the alignment. And from that, we can use this time series to compute measures of the variability of the underlying signal.

So again, the Morphologic Distance Time Series captures differences in the heart rate, because different heart rates mean the lengths of the beats are different, and that will give you a nonzero difference when you align the beats and compute the distance measure between them. But it captures the morphology as well: even if the heart rate is constant and the lengths of the beats are the same, if there are morphology differences (say the T wave looks slightly different from one beat to another, or the QRS complex looks slightly different from one beat to another), they’ll be captured in this Morphologic Distance Time Series too. So it encapsulates both the exogenous and endogenous factors that affect the heart, which is something we said we wanted to address at the outset of this talk.
Now, myocardial ischemia: by ischemia, I’m referring to what I talked about in one of the earlier slides. When a vessel that supplies the heart develops a blockage, the heart is denied blood. That’s what ischemia means. Denying blood to the myocardium, to the heart muscle, leads to abnormal repolarization, abnormal electrical conduction, and it can cause subtle changes in the electrocardiogram. So these are the endogenous and the exogenous factors.

This is what a typical Morphologic Distance Time Series looks like: this is time, and this is the dynamic-time-warping morphology difference. Looks like a mess, right? It’s very hard for any person to look at this and say with any certainty whether this patient is at high risk or not. With a computer, though, we can take this and quantify it in a very straightforward manner. We can do this in the time domain, or we can look at the power spectrum, do a Fourier transform, and look at it in the frequency domain to develop various metrics.

So how do we develop this measure? We have a database of patients who all had heart attacks. Some of them died; some of them did not. And they all wore Holter monitors, which are devices that can record your electrocardiogram for a long period of time. We used this database to develop our methods. So when you look at the Morphologic Distance Time Series in the frequency domain, is there a particular frequency band that carries prognostic information? What I mean by that is: you have your time-series signal, you transform it to the frequency domain, and you have a list of frequencies and the power spectrum. Is there a particular range of frequencies that is more meaningful than the others? We sum the power in a particular frequency band to determine the patient’s risk, and we have to determine what band is appropriate.
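A short numpy sketch of that band-power idea; the sampling rate, the stand-in signal, and the band edges are all invented (the actual diagnostic band came out of the heat-map search described next):

    import numpy as np

    fs = 2.0                        # assumed samples/second of the MD series
    series = np.random.randn(1024)  # stand-in for a morphologic-distance series

    power = np.abs(np.fft.rfft(series)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(series), d=1.0 / fs)  # frequency of each bin

    band = (freqs >= 0.3) & (freqs <= 0.55)  # a hypothetical diagnostic band
    risk_score = power[band].sum()           # more variability -> higher score
    print(risk_score)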
We used the DISPERSE2 data set of these patients, some who died and some who did not, but all of whom had had a heart attack, and we used it to derive the parameters for this model. We created a heat map where we looked at the prognostic information in all possible frequency bands. So here: this is the upper band limit, this is the lower band limit, and the colors are an assessment of the predictive value within that frequency band. This is a plot, for those of you who are familiar with it, of the area under the receiver operating characteristic curve for this predictor. And we found a particular diagnostic frequency band that was actually very useful.

And once we developed this on DISPERSE2, we took another set of thousands of patients: the MERLIN-TIMI 36 trial. Again, this was a trial that compared a drug called ranolazine to placebo, but for the purposes of this work, we were really interested in just the patients who didn’t get the drug. All of these patients, again, had had heart attacks. All of them wore Holter monitors to record the electrocardiographic signals. Some died; some didn’t. And we wanted to see if this metric that was developed on some other data set could be used to predict who would die in this data set. And sure enough: you compute the Morphologic Distance Time Series, you sum the power within the particular frequency band, and you get a number for each patient. Those patients that have high numbers have lots of variability in the underlying signal, and those are the patients that we call high risk. Those patients that have a low value for this measure, we say are at low risk.

And if you look here, this is called the Kaplan-Meier curve. This is the percentage of people who died over time, over the span of about a year. The red is the death rate for patients whom the algorithm says are at high risk, and this is the death rate for patients that we say had low risk. So what you see is that, sure enough, if the algorithm says that you have lots of variability in the electrocardiographic signal, you’re at a threefold increased risk of death. And this is a P value showing that these differences are statistically significant; that’s the important statement here. So this is really not by chance. There’s something really here.
  380. And even if you look at different subgroups of patients– so if you look
  381. at patients who are mature– those are greater than 65, those that
  382. are less than 65–
  383. in all these groups, if the algorithm says that you have lots of variability
  384. to the underlying electrocardiographic signal, you are at an increased risk
  385. of death, even a fourfold risk of death if you don’t have high blood
  386. pressure, threefold risk of death if you do, and so forth.
  387. So it works in all patient populations that we’ve looked at thus far.
  388. And if you look at this and you compare this to other sorts of
  389. metrics, other sorts of things that exist in the literature for predicting
  390. who’s at high risk of death and who isn’t–
  391. measures called heart rate variability, deceleration capacity,
  392. and so forth, as well as other metrics that are used in clinical practice–
  393. of the electrocardiograph measures, the morphologic variability does quite
  394. well in terms of the hazard ratios.
  395. So it says that you’re about a twofold increased risk even after you control
  396. for all of these other metrics.
  397. So it provides added value relative to these other metrics.
  398. And you look at the ones– and we’ve just pointed out a few here that are
  399. used in clinical practice to show that it is on par and it provides added
  400. information to these other known things that are used to quantify
  401. patients’ risk.
  402. So in conclusion, I think the one upshot to get from here is that
  403. electrocardiographic information is easy to obtain and it
  404. has prognostic value.
  405. It just requires sophisticated tools to do so.
To put this in a broader framework: we want to get information about a patient, evaluate the patient using current and past information, and choose an intervention. In clinical cardiology, as in many areas of medicine, this is typically the paradigm that's followed, and you perform an intervention that's going to affect the patient. Now, evaluation of the patient is perhaps the weakest link. By evaluation, I mean looking at the patient and trying to determine whether that patient is in trouble or not, getting a sense of how sick that patient is. Techniques from electrical engineering, computer science, medicine, physics, and so on can help in this endeavor.
One other thing: there is a lot of unappreciated information in physiologic signals, signals that are cheap and easy to obtain and can be gathered in many different settings. So we believe that by gathering all of this information, you can make better inferences about patients, identify those at high risk of adverse events, and treat them accordingly.

Thank you.


MIT Computer Science
January 9, 2013, 12:51 am
Filed under: Uncategorized
DANA MOSHKOVITZ: Hi. My name is Dana Moshkovitz. I'm in the Electrical Engineering and Computer Science Department at MIT. I'm going to talk to you about a very interesting brand of research in theoretical computer science. It's about randomness and mazes and computation and GPSs and robots and all kinds of stuff.
You may have noticed that I have a funny accent. That's because I'm Israeli. I once did this internet questionnaire, "What American accent do you have?" They ask you questions like: Mary, merry, and marry, do they sound the same or not? You answer those things, and in the end, they tell you which region you're from. And this is what I get when I do this questionnaire: I'm from the Midland. I'm from the mid-USA, although they're not sure exactly where; Pennsylvania, Ohio, or something like that. What they are sure of is that I have a great voice for television and radio. So this is what you get for taking career advice from internet questionnaires.
You may have also noticed this: I'm in my ninth month of pregnancy. It's a girl. She's my first. She's very excited about her first pre-birth appearance; she's been kicking hard. Let's start.
We're going to start by talking about mazes. Mazes, somehow, are an important part of the computer science curriculum. We feel it's our responsibility to teach you how to get to the end of a maze, if you are ever put in one. This was also part of the 6.00x curriculum. The reason is that you can think of the intersections in the maze as vertices of a graph, and the corridors between intersections as edges of the graph. Then all you have to do is find your way from some start vertex to an end vertex. In 6.00x, you studied the two basic algorithms for doing that: breadth-first search and depth-first search, BFS and DFS.
You can think of both of these as algorithms that help you solve mazes when you have bread crumbs. BFS goes like this: you mark where you started with one bread crumb. Then you mark the places that are reachable within one step with two bread crumbs. Then you mark the places that are reachable in two steps but not in one step with three bread crumbs, and so on, and so on. You go and mark your entire maze until you reach the end vertex, and then you are done. OK, so these are good algorithms, provided you have bread crumbs.
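Here is a minimal Python sketch of BFS in that breadcrumb spirit; the maze is a made-up adjacency-list graph, and the parent dictionary plays the role of the crumbs:

    from collections import deque

    maze = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A'], 'D': ['B']}

    def bfs_path(graph, start, end):
        """Return a shortest path from start to end, or None if unreachable."""
        parent = {start: None}        # the crumbs: how each vertex was first reached
        queue = deque([start])
        while queue:
            v = queue.popleft()
            if v == end:              # reached the end vertex: retrace the crumbs
                path = []
                while v is not None:
                    path.append(v)
                    v = parent[v]
                return path[::-1]
            for w in graph[v]:
                if w not in parent:   # drop a crumb the first time we see w
                    parent[w] = v
                    queue.append(w)
        return None

    print(bfs_path(maze, 'A', 'D'))   # ['A', 'B', 'D']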
Now, why do we care so much about navigating mazes? Well, this is a pretty basic computational problem. For example, in a GPS you want to be able to get from one place to another. Also, if you design a robot, you may want to plan motion for it. And there are many other possible applications. In theoretical computer science, mazes actually correspond to something we call nondeterminism in space-bounded computation. So there are many reasons to study mazes.
What I want to do now is talk to you about a very simple algorithm for solving mazes: the random walk algorithm. So what is the random walk algorithm? You are, say, a robot somewhere in the maze, and you have to decide on your next step. What you do is grab a die and toss it, and the die tells you where to go next. Suppose we did this: the die chose uniformly at random one of the possible next steps, and we went there. Now continue: just do it over and over, each time using the die to get the next step, and then, hopefully, you reach the end of the maze.
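A minimal Python sketch of the random walk, on the same kind of made-up adjacency-list maze; notice that the only thing it remembers is the current vertex:

    import random

    maze = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A'], 'D': ['B']}

    def random_walk(graph, start, end, max_steps=10**6):
        """Walk at random until the end vertex is reached (or give up)."""
        v = start
        for step in range(max_steps):
            if v == end:
                return step               # number of die tosses it took
            v = random.choice(graph[v])   # the die toss: a uniformly random neighbor
        return None                       # never reached the end within max_steps

    print(random_walk(maze, 'A', 'D'))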
So what do we think about this algorithm? Well, it's very easy to implement, which is good. It's also super efficient if you care about memory usage. If you think about it, with BFS and DFS you had to mark the entire maze; that's linear memory in the size of the maze. Here, you barely need to remember anything. You just need to remember where you are. Now, the worrying aspect of this algorithm is the time efficiency. You do that over and over, but maybe you never reach the end. Or maybe you reach the end, but it takes you a billion operations. So what's going on with that?
I'll tell you what's going on with that, but first, let me talk to you about randomness in computation and what it's good for. There are two conditions that make randomness useful for algorithms. First, it is useful when you have many, many possible strategies; here, the many strategies are the many possible walks in the maze. Second, and most importantly, among all of those many strategies, most of them are good. Here, a good strategy is one that goes through the end vertex. One can prove mathematically that if you consider all of the possible tours of length n cubed, where n is the size of the maze, then most of those tours will include the end vertex. And so we say that the time efficiency of this algorithm is n cubed.
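As an illustration (not a proof), here is a quick Python experiment on the simplest maze, a path of n vertices: the average number of steps comes out near n squared, comfortably within the n-cubed guarantee quoted for general mazes:

    import random

    def steps_to_end(n):
        """Random walk on a path 0, 1, ..., n-1, starting at 0;
        count steps until vertex n-1 is first reached."""
        v, steps = 0, 0
        while v != n - 1:
            v = v + random.choice([-1, 1]) if v > 0 else v + 1
            steps += 1
        return steps

    for n in (8, 16, 32):
        avg = sum(steps_to_end(n) for _ in range(500)) / 500
        print(n, round(avg), n ** 3)   # average is near (n-1)^2, well under n^3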
Is that good? Is it not good? Well, if n is huge, say you're building a GPS and n is the number of road intersections in the US, then n cubed is not so great. You may reach San Francisco when you wanted to get from here to Main Street, Cambridge. Fine. However, if you're in an amusement park, and n is the size of the maze in the amusement park, it's actually a pretty nice thought that even if you walk just randomly, within n cubed steps you will probably reach the end. You're not going to be stuck there for life. So this is a great algorithm.
Let me tell you what kind of questions a theoretician thinks about when they see this. So this is your theoretician; this is actually how we spend our days. The first thing we think is, "Huh, this was very useful for mazes; what about other problems? Is it useful for them too?" And the second question is, "OK, so randomness may be useful, but do we actually need it? Or maybe we can avoid using it." The answer to the first question is yes, randomness is definitely useful for algorithms; it's a very good perspective on algorithms. You saw several examples of that in 6.00x, when the instructor talked about Monte Carlo methods. The surprising answer here is the answer to the second question, whether randomness is really needed. And here, the answer is a qualified no: randomness is not really needed.
For example, for solving mazes: in 2005, Omer Reingold showed that you can solve mazes with memory usage very similar to that of the random walk algorithm, but with a deterministic algorithm. That was a breakthrough result. But actually, much more is known than just this result about solving mazes. In '97, Russell Impagliazzo and Avi Wigderson showed that, under a widely believed assumption, randomness may help, but it can only give you a polynomial speed-up. So if you have a randomized algorithm with input size n and running time t, there must be a deterministic algorithm that runs in time polynomial in t and n and solves the same problem. And this is not just about time efficiency; it also holds for memory usage. Only for memory usage you care about the logarithmic scale, and there you can even avoid the assumptions.
There are many other theorems like that. However, we theoreticians still have our work cut out for us, because there are lots and lots of unanswered questions here. For example: what is the relation between the assumption and the conclusion in the theorem? What is the minimal assumption you can get away with? For which problems does randomness help, and for which does it not? All of these and more are the kinds of questions we ask ourselves, and we're still looking for the answers.

Thank you.


Archive of discussions about religion and concepts…..
January 8, 2013, 3:49 am
Filed under: Uncategorized


On Kantianism…
December 27, 2012, 9:04 am
Filed under: Uncategorized
Wawan Setiawan
As an exercise for our thinking: as far as I know, Kant did indeed seek to combine reason and the empirical, and I agree with that. But did Kant say that reason/logic is itself actually built up from the empirical?

So what I am asking the Kantians is: according to Kant, what is the origin of reason or logic? The transcendent, or the empirical?
