On 3/14/2017 7:45 PM, Robert Wessel wrote:
>>>> [I don't expect even a remote server connected recognizer to
>>>> correctly handle "Watch Alfred Hitchcock" if he's no longer
>>>> on-the-air.  Does your TV stop working when the server goes
>>>> offline?  Your network connection dies?  Provider goes out
>>>> of business?  etc.]
>>>
>>> I dunno.  "OK Google" on my phone immediately popped up a list of
>>> Hitchcock videos and shows for that query.
>>>
>>> And yes, a large chunk of my TV service dies if my network connection
>>> goes out.
>>
>> Without your internet connection, can you turn the TV on?  off?  change
>> volume?  command it to "watch CSI Miami"?
>
> Sure I can turn it on and off without an internet connection, but

So, the TV can recognize your speech WITHOUT the need for an external
server/service.

> unless CSI Miami happens to be on a Comcast channel at the moment (or
> recorded from one earlier), I can't watch it.  And very little (some
> live events excepted) of our TV watching follows that pattern.
> On-demand, Netflix, Hulu, etc.  So TV is mostly dead for us without an
> internet connection.

Different issue, entirely -- having an internet connection to *source*
material (vs. having it to provide a speech recognition *service*).

TV could fetch the "catalogs" of each of those services "on demand" *or*
ahead of time and present them to you or use them to resolve your
unprompted requests.  If you don't have a Hulu account, no need for it
to examine Hulu's offerings, etc.

>> The music we've loaded (from CD) into SWMBO's vehicle contains song
>> titles, etc.  I can say "Play Little Green Bag" and it will realize that
>> there is a song having the title "Little Green Bag" on the internal disk
>> drive and immediately start playing it.  It might have a problem if I
>> asked it to "play fish" and fish was spelled "ghoti" on the song title...
>>
>> If the (TV) guide listed "CSI Miami", I'd expect it to be accessible without
>> requiring a server's assistance.  If _The Day the Earth Stood Still_ was
>> listed in the guide's *description* of the "Late Night Movie" (a regularly
>> scheduled time slot that presents a "movie du jour"), I'd expect to be
>> able to access it by saying "watch Day Earth Still".
>>
>> The computationally intensive part of the problem is converting sound
>> to glyphs.  The "search" algorithm beyond that is relatively trivial.
>
> No it's not - the sound-to-text part is solidly integrated with the
> database of things that are actually said.  The Google voice assistant
> is particularly adept at letting you watch it change what it's "heard"
> as you get further into your sentence.

Implementation detail.

>> Being able to say "watch old movie with big robot from outer space" would
>> require completely different processing requirements and more general
>> knowledge.
>>
>> [*Think* about it.  With speech recognition local, could *you*
>> search through a guide and come up with a likely match in these cases?
>> Would that be a HARDER challenge than recognizing the speech itself?]
>
> No I couldn't,

*Really*??  If a local service spit out a set of words that it recognized
(with likelihoods) from an utterance, you couldn't examine a set of
databases that you held, locally, to determine what the likely selection
might be?

It really *isn't* difficult.  The problem lies in the actual recognition,
not in the "mapping to meaning" aspect.

> but that's why we have computers and databases and
> whatnot - so we have *better* reference works than a TV Guide.
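[Editorial aside: the "mapping to meaning" step argued about above -- taking
the words a local recognizer emits and matching them against a locally held
guide -- can be sketched in a few lines.  The guide entries and scoring rule
below are invented for illustration; a real resolver would also weight
recognition likelihoods.]

```python
# Toy "mapping to meaning" resolver: score each guide entry by how many
# of the recognized words appear in its title or description, and pick
# the best.  Guide data here is made up for illustration.
import re

def score(entry, words):
    """Count how many recognized words occur in the entry's text."""
    text = (entry["title"] + " " + entry["description"]).lower()
    tokens = set(re.findall(r"[a-z]+", text))
    return sum(1 for w in words if w.lower() in tokens)

def best_match(guide, words):
    """Return the guide entry sharing the most words with the utterance."""
    return max(guide, key=lambda e: score(e, words))

guide = [
    {"title": "CSI Miami", "description": "Crime drama."},
    {"title": "Late Night Movie",
     "description": "Tonight: The Day the Earth Stood Still."},
]

# "watch Day Earth Still" -> drop the command word, match the rest.
print(best_match(guide, ["Day", "Earth", "Still"])["title"])  # Late Night Movie
```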
Language feature selection
Started by ●March 5, 2017
Reply by ●March 15, 2017
Reply by ●March 15, 2017
Hi Don,

On Sun, 12 Mar 2017 17:25:43 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>Assume applet, memory_manager_module, applet_parent.
>
>Applet_parent (for example) instantiates a memory object for applet.  Part
>of that requires binding parameters to that memory object (size, granularity,
>allocation_policy, release_policy, etc.).  Applet_parent need not persist
>beyond this point (assume its job is done).
>
>Where do the bound parameters get stored?  In the applet?  In the memory
>object?  Obviously, can't be part of applet_parent because that won't
>persist (unless you make other arrangements for it or portions of it).
>
>[I.e., applet_parent can't define an allocation handler in its body.]
>
>Now, migrate applet to another node.  Another (shared) instance of
>memory_manager_module likely already exists on that node.  If the
>allocation handler was a "stock" handler (contained in the
>memory_manager_module), can you simply copy the state over to
>the new node?
>
>What if the allocation handler was resident in some OTHER module
>on the original node?  Do you migrate that other module, as well?
>Or, instantiate another copy if the module has outstanding references
>on the first node??

For one "applet" (process / security context) to allocate memory on
behalf of another, that memory has to be supplied by something that
exists outside of either of them.  Thus the system memory manager can't
be a dynamic "module" loaded locally into a process.  Its *interface*
could be, but not the manager itself.

Leaving that aside.  Ok, so you have a block of system memory, and you
created one (or more) heaps having certain properties within the block -
heaps which are "managed" locally in the process by code from a dynamic
module.  And now you want to rehost the process.
1) The obvious: store the heap properties together with the heap so that
"copying" (serializing) the contents of the memory block to another host
replicates the heap state for the equivalent "manager" module on that
system.

2) Rehosting a running process necessarily requires checkpointing,
copying or reconstructing its entire dynamic state: heaps, stacks,
globals, loaded modules, kernel [meta]data, etc. - far more than one
memory block.

Every piece of distributed state must be identifiable as belonging to
the process - regardless of what "module" may have created or is
currently managing it.  You have to copy all data belonging to the
process, so having heap properties stored separately from the heap
itself is not a problem.

[Unless the systems involved do not have MMU/VMM such that the process
address space can be faithfully reconstructed ... but if you really
need to do it, it can be done using a software based virtual machine
solution.]

>Or, do you require declarations as to what *can* migrate and what
>is bound in place?  Then, contend with the consequences of
>cascaded dependencies?
>
>The "consistent" solution is to make all of these first class
>objects and let the OS manage them and their locations/instantiations.
>But, that adds considerably to the cost of *using* them.

Yes, that certainly is the ridiculous, overcompensating solution.

The "consistent" solution is the one done already by any decent OS: keep
a record of all the kernel structures and memory pages belonging to the
process.  Program objects then are just ordinary data in memory blocks
"owned" by the process, or by the kernel on behalf of the process.

Ensuring that a rehosted program will work is another matter.  But as I
understand (???) your system, there is location transparent naming and
equivalent[*] hosts will expose the same set of service APIs.
[*] capable of running the same applications.

>It's a lot easier to deal with implementations where "everything" is
>a cohesive "blob" instead of being able to slice-and-dice it at
>run-time and redistribute it based on evolving workloads, resources,
>etc.

You can somewhat mitigate the problem by making large allocations
piecewise: abusing VMM to make them appear address contiguous.  E.g., if
a process asks for 100MB, reserve the address space but instantiate just
a few megabytes at a time, on demand as the process touches unbacked
addresses.

[Because (I assume) you don't want to overcommit memory, you need to
reserve requested address space both locally in the process, and
globally so that other processes can't accidentally grab it.]

You need to reconstruct the address space on the new host, but you need
only copy as much physical memory as the process is actually using.

Aside: one of the nice features of "moving" GC is that it compacts live
data into a (usually) smaller address range.  You don't want to have to
copy a large block to get at a much smaller volume of data contained
within it.

There are a number of tricks that involve using VMM to implement GC.
You can't completely avoid data copying if you want to compact the live
data, but using VMM techniques together with clustered BiBoP allocation,
you can reduce the amount of copying to a bare minimum.

It also is possible to use VMM and just the tracing step of GC to
identify the actual set of pages containing live data ... you don't
need to copy any more than that even if the process has much more
memory instantiated.

And regardless of whether you (would want to) use GC in normal
operation, if it is supported by the language/runtime, a compacting
collection executed before rehosting a process would minimize both
what needs to be copied and the physical memory that must be
instantiated on the target.

>Last 25# of oranges... yippee!

10 inches of snow, followed by sleet, followed by freezing temps.
The crust is ~3/4 of an inch thick - my 185 lbs can walk on top of it.
Had to use the heavy coal shovel to break it up so it could be removed.

George
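[Editorial aside: the piecewise allocation trick George describes -- reserve
the full address range up front, back pages only when first touched, and copy
only the backed pages on migration -- can be sketched as a toy simulation.
The class, names, and page size below are invented; a real implementation
would use mmap/mprotect-style VMM calls rather than Python dictionaries.]

```python
# Simulation of lazy piecewise backing: a large reservation whose pages
# are instantiated only on first write.  resident_bytes() is what a
# migration would actually have to copy.
PAGE = 4096

class SparseRegion:
    def __init__(self, size):
        self.size = size          # reserved (not yet backed) bytes
        self.pages = {}           # page number -> bytearray, on demand

    def _page(self, addr):
        """Return the backing page for addr, instantiating it on first touch."""
        if addr >= self.size:
            raise IndexError("outside reserved range")
        return self.pages.setdefault(addr // PAGE, bytearray(PAGE))

    def write(self, addr, data):
        for i, b in enumerate(data):
            self._page(addr + i)[(addr + i) % PAGE] = b

    def resident_bytes(self):
        return len(self.pages) * PAGE

r = SparseRegion(100 * 1024 * 1024)    # "ask for 100MB"
r.write(0, b"hello")
r.write(50 * 1024 * 1024, b"world")
print(r.resident_bytes())              # only 2 pages were instantiated
```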
Reply by ●March 16, 2017
Hi George,

On 3/15/2017 6:05 PM, George Neuner wrote:
>> Assume applet, memory_manager_module, applet_parent.
>>
>> Applet_parent (for example) instantiates a memory object for applet.  Part
>> of that requires binding parameters to that memory object (size, granularity,
>> allocation_policy, release_policy, etc.).  Applet_parent need not persist
>> beyond this point (assume its job is done).
>>
>> Where do the bound parameters get stored?  In the applet?  In the memory
>> object?  Obviously, can't be part of applet_parent because that won't
>> persist (unless you make other arrangements for it or portions of it).
>>
>> [I.e., applet_parent can't define an allocation handler in its body.]
>>
>> Now, migrate applet to another node.  Another (shared) instance of
>> memory_manager_module likely already exists on that node.  If the
>> allocation handler was a "stock" handler (contained in the
>> memory_manager_module), can you simply copy the state over to
>> the new node?
>>
>> What if the allocation handler was resident in some OTHER module
>> on the original node?  Do you migrate that other module, as well?
>> Or, instantiate another copy if the module has outstanding references
>> on the first node??
>
> For one "applet" (process / security context) to allocate memory on
> behalf of another, that memory has to be supplied by something that
> exists outside of either of them.

Actually, the memory can exist anywhere: in some global space, in the
"allocating entity" or in the "client entity".  As can the code that
does the actual allocating (at the behest of the allocator).

[Consider a client environment not supporting arbitrary pointers.
Unless something *gives* you a reference/handle to a piece of memory,
there's no way you can access its contents (short of an exploit).
E.g., no way you can go looking up/down the stack even though you know
there *is* a stack supporting the implementation language.]

> Thus the system memory manager can't be a dynamic "module" loaded
> locally into a process.  Its *interface* could be, but not the manager
> itself.
>
> Leaving that aside.
>
> Ok, so you have a block of system memory, and you created one (or
> more) heaps having certain properties within the block - heaps which
> are "managed" locally in the process by code from a dynamic module.
> And now you want to rehost the process.

Objects (heaps in this example) are managed by <something>.  That
something may be the entity "owning" the object, a proxy or the OS
acting as the "proxy of last resort".  The actions involved in
management may reside in the OS, a service (created by <something> --
including the client in question!) or the client.  At the direct or
*implied* request of the client, etc.

> 1) The obvious: store the heap properties together with the heap so
> that "copying" (serializing) the contents of the memory block to
> another host replicates the heap state for the equivalent "manager"
> module on that system.

This potentially exposes those "trusted" parameters to abuse by the
entiti(es) having access to the underlying object.  E.g., if the client
can tweak the "heapsize" parameter stored there, then it can potentially
trick the "manager" on the remote system into instantiating a larger
heap than it was originally allowed to create (unless you rerun the
"create_heap" request as part of the migration effort).

Likewise, if some of the "properties" reside *in* the client (e.g., the
algorithm by which allocations will be performed -- under the
supervision of some remote/system service that actually *does* them),
then you have to drag that/those things along with you.
[Imagine these things reside in a *third* entity]

> 2) Rehosting a running process necessarily requires checkpointing,
> copying or reconstructing its entire dynamic state: heaps, stacks,
> globals, loaded modules, kernel [meta]data, etc. - far more than one
> memory block.
>
> Every piece of distributed state must be identifiable as belonging to
> the process - regardless of what "module" may have created or is
> currently managing it.
>
> You have to copy all data belonging to the process, so having heap
> properties stored separately from the heap itself is not a problem.

In my "portable" case, this is handled by moving the object handles.
The system can then opt to "optimize" execution by (later?) moving the
actual object instances (to minimize communication delays or take
better advantage of processing power on some particular node -- which
may differ from the source or destination nodes).

But, you can (currently!) create "non-portable" objects.  And, objects
that can't (easily) be shared.  (e.g., for a "conventional" heap, you
can let the default policies available in a "heap manager" govern the
way the heap operates.  *But*, you can't take advantage of any
"enhanced capabilities" in much the same way that you can't in a more
conventional process container, etc.)

The problem with this (apparently arbitrary) distinction between
portable/shareable objects and "legacy" variety implementations is that
the developer has to explicitly decide what can be migrated, shared,
and how (because the developer has to take extra steps at compile and
link time to make those capabilities available).  So, where these sorts
of bindings get stored (and how they get tracked) varies based on this
"other" information that the developer supplies.

[You don't want to have to change the sources to support these
abilities as you want to be able to experiment with letting them
migrate, be shared, etc. without having to undertake a
rewrite/refactoring.
And, I can't see an easy way for the build tools to determine that
without explicit directives from the developer (hence my related
comments in this thread)]

> [Unless the systems involved do not have MMU/VMM such that the process
> address space can be faithfully reconstructed ... but if you really
> need to do it, it can be done using a software based virtual machine
> solution.]
>
>> Or, do you require declarations as to what *can* migrate and what
>> is bound in place?  Then, contend with the consequences of
>> cascaded dependencies?
>>
>> The "consistent" solution is to make all of these first class
>> objects and let the OS manage them and their locations/instantiations.
>> But, that adds considerably to the cost of *using* them.
>
> Yes, that certainly is the ridiculous, overcompensating solution.
>
> The "consistent" solution is the one done already by any decent OS:
> keep a record of all the kernel structures and memory pages belonging
> to the process.
>
> Program objects then are just ordinary data in memory blocks "owned"
> by the process, or by the kernel on behalf of the process.

But that ignores all of the "other" dependencies that can be in play at
any given time (i.e., my "solution" being these portable handles that
the OS tracks for the application/developer).  So, <something> knows
that there is an aspect of the current applet's instantiation that
relies upon something else *or* that is relied upon BY something else.

[My approach lets the run-time know of these possible external
references by the presence of object handles held -- or exported -- by
the applet in question.  *Absent* these, the task is just a block of
memory and "processor state" that can be copied anywhere and
"resumed".]

> Ensuring that a rehosted program will work is another matter.  But as
> I understand (???) your system, there is location transparent naming
> and equivalent[*] hosts will expose the same set of service APIs.
>
> [*] capable of running the same applications.
>> It's a lot easier to deal with implementations where "everything" is
>> a cohesive "blob" instead of being able to slice-and-dice it at
>> run-time and redistribute it based on evolving workloads, resources,
>> etc.
>
> You can somewhat mitigate the problem by making large allocations
> piecewise: abusing VMM to make them appear address contiguous.
>
> E.g., if a process asks for 100MB, reserve the address space but
> instantiate just a few megabytes at a time, on demand as the process
> touches unbacked addresses.
>
> [Because (I assume) you don't want to overcommit memory, you need to
> reserve requested address space both locally in the process, and
> globally so that other processes can't accidentally grab it.]

Resource constraints vary with the resource and the consumer/provider.
You can overcommit many resources because many "jobs" don't exploit
their worst-case resource needs.  But, you do so at the risk of
delaying the availability of those resources to the job in question.
Or, some other job interested in those resources.

[E.g., if your job is diarising a previously recorded telephone
conversation, it can *probably* afford to block waiting on memory that
it needs to complete that task -- perhaps even indefinitely!  It
shouldn't be prevented from starting just because the MAX memory that
MIGHT be required isn't presently available.  Nor should all of that
memory be wired down for its benefit without knowing that it *will*
need it -- or, when!]

> You need to reconstruct the address space on the new host, but you
> need only copy as much physical memory as the process is actually
> using.
>
> Aside: one of the nice features of "moving" GC is that it compacts
> live data into a (usually) smaller address range.
> You don't want to have to copy a large block to get at a much smaller
> volume of data contained within it.

But you need a way of tying a "handle" to the data involved -- and,
later, resolving that reference.

> There are a number of tricks that involve using VMM to implement GC.
> You can't completely avoid data copying if you want to compact the
> live data, but using VMM techniques together with clustered BiBoP
> allocation, you can reduce the amount of copying to a bare minimum.
>
> It also is possible to use VMM and just the tracing step of GC to
> identify the actual set of pages containing live data ... you don't
> need to copy any more than that even if the process has much more
> memory instantiated.
>
> And regardless of whether you (would want to) use GC in normal
> operation, if it is supported by the language/runtime, a compacting
> collection executed before rehosting a process would minimize both
> what needs to be copied and the physical memory that must be
> instantiated on the target.
>
>> Last 25# of oranges... yippee!
>
> 10 inches of snow, followed by sleet, followed by freezing temps.

Yeah, caught a glimpse of that on the national news...

> The crust is ~3/4 of an inch thick - my 185 lbs can walk on top of it.
> Had to use the heavy coal shovel to break it up so it could be removed.

90's, here.  I think 95 expected this weekend.  Being outdoors is a
chore as it's either too warm during the daylight hours *or* the air
too "smelly" (everything is in bloom) in the cooler hours when the
breeze has died down.

I'd (personally) appreciate a cold spell (but not *now* as the new crop
of fruit is setting in place) to slow the growth process.  I suspect it
will be a bad year for insects!

[Normally, we don't see 95 until May; and 90 until mid-April]
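[Editorial aside: the tampering concern raised in this exchange -- a client
tweaking a "heapsize" parameter that rode along with the serialized object --
and the suggested safeguard of rerunning the "create_heap" admission check on
migration can be sketched as follows.  The grant table and every name here
are invented for illustration.]

```python
# Sketch of re-validating migrated heap parameters against a trusted
# record, rather than trusting whatever arrived with the object.
GRANTED = {"applet-7": 4096}   # trusted record: max heap size per client

def migrate_heap(client, claimed_size):
    """Admit the migrated heap only if its size is within the original grant."""
    limit = GRANTED.get(client, 0)
    if claimed_size > limit:
        raise PermissionError(f"{client}: {claimed_size} exceeds grant {limit}")
    return {"owner": client, "size": claimed_size}

print(migrate_heap("applet-7", 4096)["size"])   # within grant: accepted
try:
    migrate_heap("applet-7", 1 << 20)           # "tweaked" parameter
except PermissionError:
    print("rejected")
```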
Reply by ●March 17, 2017
Tom Gardner wrote:
> On 14/03/17 02:23, Paul Rubin wrote:
>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>> "When I was 14 I thought my father was an idiot. When I became 21 I
>>> was amazed at how much he had learned in the past 7 years"
>>
>> One of the profs at my old school said at his retirement dinner "I've
>> been teaching these kids freshman calculus for FORTY YEARS and they
>> still don't get it!".
>
> :)
>
> We learned integration and differentials of polynomials
> except 1/x for exams at 15. I can still visualise the
> teacher taking a double period (80 mins) for each,
> deriving the concepts from first principles.
>
> OK, he was a good teacher (but not a good mathematician,
> he knew his limits!), but even so I've never understood
> why people think calculus is inherently difficult.

Limits are conceptually difficult.  It takes work.  You have to be able
to suspend disbelief in things like infinity.

-- 
Les Cargill
Reply by ●March 17, 2017
On 17/03/17 11:01, Les Cargill wrote:
> Tom Gardner wrote:
>> On 14/03/17 02:23, Paul Rubin wrote:
>>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>>> "When I was 14 I thought my father was an idiot. When I became 21 I
>>>> was amazed at how much he had learned in the past 7 years"
>>>
>>> One of the profs at my old school said at his retirement dinner "I've
>>> been teaching these kids freshman calculus for FORTY YEARS and they
>>> still don't get it!".
>>
>> :)
>>
>> We learned integration and differentials of polynomials
>> except 1/x for exams at 15. I can still visualise the
>> teacher taking a double period (80 mins) for each,
>> deriving the concepts from first principles.
>>
>> OK, he was a good teacher (but not a good mathematician,
>> he knew his limits!), but even so I've never understood
>> why people think calculus is inherently difficult.
>
> Limits are conceptually difficult. It takes work. You
> have to be able to suspend disbelief in things like infinity.

Work?  Yes, good :)

Difficult?  Yes, there is a graunch; it probably took 40 mins of
teaching and discussion.
Reply by ●March 17, 2017
Tom Gardner wrote:
> On 17/03/17 11:01, Les Cargill wrote:
>> Tom Gardner wrote:
>>> On 14/03/17 02:23, Paul Rubin wrote:
>>>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>>>> "When I was 14 I thought my father was an idiot. When I became 21 I
>>>>> was amazed at how much he had learned in the past 7 years"
>>>>
>>>> One of the profs at my old school said at his retirement dinner "I've
>>>> been teaching these kids freshman calculus for FORTY YEARS and they
>>>> still don't get it!".
>>>
>>> :)
>>>
>>> We learned integration and differentials of polynomials
>>> except 1/x for exams at 15. I can still visualise the
>>> teacher taking a double period (80 mins) for each,
>>> deriving the concepts from first principles.
>>>
>>> OK, he was a good teacher (but not a good mathematician,
>>> he knew his limits!), but even so I've never understood
>>> why people think calculus is inherently difficult.
>>
>> Limits are conceptually difficult. It takes work. You
>> have to be able to suspend disbelief in things like infinity.
>
> Work? Yes, good :)
>
> Difficult? Yes, there is a graunch; It probably took
> 40mins of teaching and discussion.

I remain highly skeptical of that figure.  That was for the initial
gloss; it takes a considerably longer time to gain a working knowledge
of limits.

-- 
Les Cargill
Reply by ●March 17, 2017
On 17/03/17 12:16, Les Cargill wrote:
> Tom Gardner wrote:
>> On 17/03/17 11:01, Les Cargill wrote:
>>> Tom Gardner wrote:
>>>> On 14/03/17 02:23, Paul Rubin wrote:
>>>>> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>>>>>> "When I was 14 I thought my father was an idiot. When I became 21 I
>>>>>> was amazed at how much he had learned in the past 7 years"
>>>>>
>>>>> One of the profs at my old school said at his retirement dinner "I've
>>>>> been teaching these kids freshman calculus for FORTY YEARS and they
>>>>> still don't get it!".
>>>>
>>>> :)
>>>>
>>>> We learned integration and differentials of polynomials
>>>> except 1/x for exams at 15. I can still visualise the
>>>> teacher taking a double period (80 mins) for each,
>>>> deriving the concepts from first principles.
>>>>
>>>> OK, he was a good teacher (but not a good mathematician,
>>>> he knew his limits!), but even so I've never understood
>>>> why people think calculus is inherently difficult.
>>>
>>> Limits are conceptually difficult. It takes work. You
>>> have to be able to suspend disbelief in things like infinity.
>>
>> Work? Yes, good :)
>>
>> Difficult? Yes, there is a graunch; It probably took
>> 40mins of teaching and discussion.
>
> I remain highly skeptical of that figure. That was
> for the initial gloss; it takes a considerably longer
> time to gain a working knowledge of limits.

It was sufficient for us to believe in the concepts, and to be able to
understand the explanation of how polynomials (except 1/x) could be
differentiated.  We then went on to use differentiation for the O-level
exams at 16.

Well, I say 16 because that was the normal age, but everybody in my
school (a selective state "Grammar" school) took maths and English
language a year early, so that if we failed we could have another go :)

Curiously, 15/20 years after that I was talking to a secondary school
maths teacher who refused to believe we did calculus at O-level.
Now it was an unusual syllabus (University of London Syllabus D), but I have since seen a copy of the textbook we used, and it did indeed have exactly what I remembered in it.
Reply by ●March 17, 2017
Tom Gardner <spamjunk@blueyonder.co.uk> writes:
> It was sufficient for us to believe in the concepts, and to be able to
> understand the explanation of how polynomials (except 1/x) could be
> differentiated.

I think that might have been a less rigorous treatment than is usually
seen in calculus classes where students get confused by limits.

Here's a cute problem that can let you check your understanding.  It's
about continuity rather than limits per se, but if you understand one
you probably understand the other.

Consider the function f(x) defined as follows:

  if x is a rational number p/q in lowest terms, then f(x) = 1/q.

  if x is irrational, then f(x) = 0.

Questions:

1) For what values of x, if any, is f(x) continuous?

2) For what values of x, if any, is f(x) discontinuous?
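[Editorial aside: this f is the classical Thomae's function.  Its rational
branch can be played with directly using Python's fractions module (a float
can never hold an irrational, so the f(x) = 0 branch stays out of reach
computationally); the helper name is mine:]

```python
# f(p/q) = 1/q with p/q in lowest terms; Fraction reduces automatically.
from fractions import Fraction

def f(x):
    """Rational branch of Thomae's function (f is 0 at irrationals)."""
    x = Fraction(x)
    return Fraction(1, x.denominator)

print(f(Fraction(2, 4)))      # 2/4 reduces to 1/2, so f = 1/2
print(f(Fraction(355, 113)))  # a good approximation of pi -> f = 1/113
```

Note the hint toward question 1: rationals that approximate an irrational
well necessarily have large denominators, so f is tiny near every irrational.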
Reply by ●March 17, 2017
On 17/03/17 19:03, Paul Rubin wrote:
> Tom Gardner <spamjunk@blueyonder.co.uk> writes:
>> It was sufficient for us to believe in the concepts, and to be able to
>> understand the explanation of how polynomials (except 1/x) could be
>> differentiated.
>
> I think that might have been a less rigorous treatment than is usually
> seen in calculus classes where students get confused by limits.
>
> Here's a cute problem that can let you check your understanding.  It's
> about continuity rather than limits per se, but if you understand one
> you probably understand the other.
>
> Consider the function f(x) defined as follows:
>
>   if x is a rational number p/q in lowest terms, then f(x) = 1/q.
>
>   if x is irrational, then f(x) = 0.
>
> Questions:
>
> 1) For what values of x, if any, is f(x) continuous?
>
> 2) For what values of x, if any, is f(x) discontinuous?

Just off the top of my head, I'd say it is continuous at all irrational
x, and discontinuous at all rational x.  But I am not entirely sure.
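[Editorial aside: the off-the-top-of-the-head answer is in fact the textbook
result for Thomae's function.  A sketch of the standard argument, for
reference:]

```latex
\textbf{Claim.} $f$ is continuous at every irrational $a$ and
discontinuous at every rational $a$.

\textbf{Sketch.} Fix irrational $a$ and $\epsilon > 0$; choose $N$ with
$\tfrac{1}{N} < \epsilon$.  Only finitely many rationals $p/q$ with
$q \le N$ lie in $(a-1,\,a+1)$, and none equals $a$, so some
$\delta > 0$ excludes them all from $(a-\delta,\,a+\delta)$.
For $|x - a| < \delta$:
\[
  |f(x) - f(a)| =
  \begin{cases}
    0 & x \text{ irrational},\\[2pt]
    \tfrac{1}{q} < \tfrac{1}{N} < \epsilon & x = \tfrac{p}{q},\ q > N.
  \end{cases}
\]
At a rational $a = p/q$, $f(a) = \tfrac{1}{q} > 0$, but every
neighborhood of $a$ contains irrationals where $f = 0$, so $f$ is
discontinuous at $a$.
```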
Reply by ●March 17, 2017