package dev.deyve.algorithmsjava.sorting;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Arrays;

/**
 * Quick Sort
 */
public class QuickSort {

    private static final Logger logger = LoggerFactory.getLogger(QuickSort.class);

    public static Integer[] sort(Integer[] array) {
        return sort(array, 0, array.length - 1);
    }

    private static Integer[] sort(Integer[] array, Integer start, Integer end) {
        if (start >= end) {
            return array;
        }

        var boundary = partition(array, start, end);

        sort(array, start, boundary - 1);
        sort(array, boundary + 1, end);

        return array;
    }

    private static Integer partition(Integer[] array, Integer start, Integer end) {
        var pivot = array[end];
        var boundary = start - 1;

        for (var index = start; index <= end; index++) {
            if (array[index] <= pivot) {
                swap(array, index, ++boundary);
            }
        }

        return boundary;
    }

    private static void swap(Integer[] array, Integer firstIndex, Integer secondIndex) {
        var temporaryVariable = array[firstIndex];
        array[firstIndex] = array[secondIndex];
        array[secondIndex] = temporaryVariable;

        logger.info("Swap: {}", Arrays.toString(array));
    }
}
STACK_EDU
[07:37] <brobostigon> morning boys and girls.
[08:20] <zmoylan-pi> o/
[08:26] <brobostigon> o/
[09:43] <knightwise> hey peepz
[09:45] <brobostigon> hi knightwise
[09:46] <knightwise> just installed 18.04 on my old imac
[09:46] <knightwise> very impressed so far
[09:49] <brobostigon> :)
[09:56] <knightwise> also installing it on my xps13 , cant use the windows version on my xps because of the GDPR
[09:58] <brobostigon> i havent tried it yet, might roll a live usb before i upgrade, to test things out.
[09:58] <zmoylan-pi> i usually wait a week or two after the release in case there are any whoopsies
[09:59] <brobostigon> yes, hence my precaution of testing prior also.
[10:00] <knightwise> its pretty clean . Its amazing how fast gnome/unity is right now
[10:00] <knightwise> even on a dual core imac with 4 gigs of ram and a 128ssd
[10:01] <knightwise> still no bluetooth love though. i think it has something to do with the firmware of the bluetooth chip of my xps
[10:01] <knightwise> so no bluetooth mouse :(
[10:02] <brobostigon> :(
[10:02] <brobostigon> i had problems like that with the wifi on my ibm thinkpad.
[10:02] <knightwise> which is a shame if you have a 1200 euro top of the line laptop and need to plug in an IR receiver for your mouse
[10:02] <zmoylan-pi> i'm not fan of bt mice or keyboards. you think about a problem, you come up with a solution. you start typing and have to wait 5 seconds for bt to unsuspend... :-/
[10:03] <knightwise> Hmm.. dont have that problem very often
[10:04] <zmoylan-pi> i've seen it on every bt keyboard so far and i've seen a fair few. haven't tried apple keyboard mind and they may have added a few shortcuts to make it more elegant
[10:05] <knightwise> does anyone else have BT issues with their XPS ?
[10:05] <zmoylan-pi> is anyone awake with an xps you mean :-)
[10:48] <knightwise> zmoylan-pi: correct :)
UBUNTU_IRC
As part of the GMS contract for 2019/20 a new 'Quality Improvement' domain has been introduced which includes 'End of Life Care'.

End of Life Care

QI003. The contractor can demonstrate continuous quality improvement activity focused upon end of life care as specified in the QOF guidance.

QI004. The contractor has participated in network activity to regularly share and discuss learning from quality improvement activity as specified in the QOF guidance. This would usually include participating in a minimum of two peer review meetings.

Practices will need to:
- Evaluate the current quality of their end of life care and identify areas for improvement – this would usually include a retrospective death audit (QI003)
- Identify quality improvement activities and set improvement goals to improve performance (QI003)
- Implement the improvement plan (QI003)
- Participate in a minimum of 2 GP network peer review meetings (QI004)
- Complete the QI monitoring template in relation to this module (QI003 + QI004)

How to do a retrospective death baseline analysis (audit)

Practices should review a sample of X deaths over the previous 12 months to establish baseline performance on the areas of care listed above and to calculate their expected palliative care register size. A suggested template to support data collection for the audit can be downloaded from here.

The number of deaths each year will vary between individual practices due to differences in the demographics of the practice population. Practices could use the number of deaths reported in their practice populations in the previous year to assess how well they are identifying patients who would benefit from end of life care. An audit standard against which to assess current practice would be that the practice was successfully anticipating approximately 60% of deaths. There are reports available which can be accessed at 'Ardens > Conditions | Frailty and End of Life > Activity Last Year'.
Information about the Ardens 'End of Life and Palliative Care' template can be found here. End of Life register reports are available at 'Ardens > Team | Meetings > End of Life'.

End of Life Report Output

To enable practices to break down data quickly and easily there is a report output setup called 'End of Life'.

How to use a Report Output
- Run your chosen report
- Right click on the report and select 'Show Patients'
- Just above the report list click on 'Select Output'
- A new window will open, select 'Pre-defined report output'
- Select 'End of Life' and click 'OK'

You should then be able to see all the relevant data on one screen, which you can easily export to Excel.
OPCFW_CODE
Message from a shy user

How can I make sure I'm not distracting researchers from their work with a bunch of questions? I feel I could ask a lot, primarily because I have studied very little; I don't think I would need to ask as much if I had studied more. I don't think my questions will hold back the descriptive aspects of the science; I even think my questions might lead to progress. Am I invited to ask on this site? Thanks.

That's actually a great question. I think there is a balance, and whether to post or not is something you will have to decide for yourself. Here are some considerations before posting...

You should not ask questions that are already answered on this (or another) Stack Exchange site. This necessarily requires you to do some sort of research. I think you should at least try to google the question phrased in a bunch of different ways. It's quite possible that the answer does exist online but you just don't know what to search for; that is fine, but it should be clear that you (the poster) made an effort to find the answer yourself.

There is also the factor of "how useful is this going to be for others". Questions of general interest are always very welcome. You can see that some of the old questions are a really heavily used resource, while others are so specific that they are seen only a few times. So, the more general your question, the better. Sometimes it is good to think about whether you can phrase your question in a more general way.

Finally, consider how frequently you post and how keen people are to help you out (I think there is a bit of common sense in this). If it feels all right, it probably is. SE folks are not the subtlest in explaining to people what's sub-optimal about their questions. So to sum up: you can ask as many relevant questions as you want, even if they are basic, as long as you don't annoy the heck out of the community (and you will be able to tell).
Stuff to make people like your questions:
- Be concrete and give details, but stay cohesive and preferably reproducible (with toy data and small snippets of code).
- Show initiative and do your own research; that will both help others understand the true source of your problem and make you more relatable.
- Disclaim everything that should be disclaimed: is it homework? Are you a developer of the method you are discussing? Etc.
- Be polite and kind, but don't write "Thanks" or apologize for basic questions; that is all fine. Gratitude should be expressed through upvotes or accepting the right answer (that makes the answer more useful for the next person facing the same problem).

So don't worry THAT much, just ask :-)
STACK_EXCHANGE
How to test a new nameserver before making it live

Possible Duplicate: Testing nameserver configuration using it

I'm thinking of changing from my hosting provider's nameservers to Route53 (Amazon's distributed nameserver) for several reasons. I'm currently setting up all the records as they are on my current host's page (I can see my DNS settings but I cannot change them). Since I'm not used to working with Route53's hosted zones, is there a way I can test the new nameserver's resolution settings before updating the domain to point to the new nameserver? For example, I'm not sure if the trailing dot in CNAME records is necessary or not...

Use dig:

dig mydomainname.example @mynewnameserver.example

You can easily do that using nslookup; the process is as follows:
1) Enter nslookup
2) Run server $YourDNSServerName, where $YourDNSServerName is one of the DNS servers responsible for your zone at Route53, such as ns-131.awsdns-16.com
3) From there just enter your records and see the responses.

Thanks, it works!! Can you please help me to understand what I should set for these records?

@ IN MX 1 aspmx.l.google.com.
@ IN MX 5 alt1.aspmx.l.google.com.
@ IN MX 5 alt2.aspmx.l.google.com.
@ IN MX 10 aspmx2.googlemail.com.
@ IN MX 10 aspmx3.googlemail.com.

I should leave the domain blank, am I right?

Yes, if you're setting MX records for the domain, the domain should be blank.

Use dig and not nslookup for any serious DNS debugging. nslookup has many known flaws and has been deprecated. http://veggiechinese.net./nslookup_sucks.txt

Set up your name server and then configure a test machine to use it for DNS. I'm not sure what client system you're using, but since I'm on a Windows box at the moment, here is where you'd put that new nameserver value on a Windows machine. Apply those DNS settings to a number of machines for testing purposes, use them as normal, and ensure nothing's broken.

This will not work. This screen allows you to enter the recursive nameservers you want to use.
This is certainly not the way to test new authoritative nameservers (a lot of things will start to break, as you will get DNS replies for only a very small subset of names, or even none at all).

Why wouldn't that work? The flow goes:
1. Change Windows to use the new authoritative nameservers.
2. Test resolution of the relevant domain names.
3. Change back to some recursive nameservers.

During step 2 you won't be able to resolve most other domain names, so if you're testing a website you should be aware that some content, such as assets from a CDN, may not load; but apart from that it seems fine to me.
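The dig approach above can be wrapped in a small helper so every record type is queried the same way against the not-yet-live server. A minimal sketch (the `check_record` name, the example.com records, and the Route53 host below are placeholders, not anything from the original question):

```shell
# check_record TYPE NAME SERVER
# Queries one record directly against a specific authoritative nameserver,
# bypassing the resolver your machine normally uses.
check_record() {
  dig +short "$1" "$2" "@$3"
}

# Placeholder invocations -- substitute your own zone and its Route53 NS hosts:
# check_record MX    example.com     ns-131.awsdns-16.com
# check_record CNAME www.example.com ns-131.awsdns-16.com
```

Running the same query against the currently live nameservers and diffing the output is a quick way to spot records that were copied across incorrectly; `dig +short` also prints CNAME targets fully qualified, trailing dot included, which makes it easy to see whether your targets ended up as the names you intended.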
STACK_EXCHANGE
Or at least the start of my career in IT… I was fortunate and had a job offer lined up. My college was big on internships. I don't blame them; it was a great way to start in your field and get some experience, figure out what you like and, perhaps even more important, what you don't like.

I started college as an Information Security major. Red Team vs. Blue Team, intrusion detection, ethical hacking. That was what high-school me thought I wanted to do. That dream faded my sophomore year after some grueling network courses. I understood the fundamentals and the practical side of it all, but once we got into RSTP and BGP, I lost interest pretty quickly. This was compounded by a foreign professor who decided to take a vacation for the first few weeks of the semester, leaving us to struggle through complex labs with little instruction or feedback.

I made a small pivot just in time for an internship in systems administration. While I had a bit less coursework in this particular area, I was a quick read and motivated to learn. My job was to sit on an Outlook mailbox and process user account creations/deletions and permission changes. Exhilarating. The actual process required a lot of data entry and providing paper trails for auditing purposes. The same data would be entered in multiple places, which led to the potential for human error. This was data entry; there was little technical skill required once you had the process down. I got to the end of the first week and said to myself: I've gotta find an easier way to do this.

The systems I was using were primarily AIX, an old IBM offshoot of Unix popular in the enterprise space. Lacking the conveniences of modern bash environments, I was left with KornShell. Not being familiar with it, I simply looked for a way to automate my own input to the remote sessions that I was spinning up with PuTTY all day. Luckily PuTTY allows you to pipe in commands, and thus V1 of automating my job was born.
putty.exe [email protected] -m c:\local\path\commands.txt

I slapped some PowerShell in front to query for the parameters that I needed, saved the output to a file, then passed the file off to PuTTY to get executed remotely. Voila! Quick and dirty. I blasted through the backlog of work left by the previous intern in just a day or two. And just as quickly found myself sitting at my desk with nothing to do.

Version 2 came with some upgrades. I looked up some KornShell, built out a little CLI for it all, and made some scripts for the common tasks. For instance, with password resets, all I had to do was enter the username and the request number. The script would generate a temp password, reset it, unlock the user's account if locked, email the user, and save off the log for auditing. This freed me up to take on some real work... aka not intern work. My team saw the work I put in, and while ultimately I didn't get a position on that team, I did stay with the company - not in system administration, but in development.
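For illustration, here is a reconstruction of what that V2 password-reset helper might have looked like. This is a sketch, not the original script: the AIX commands in the comments, the log location, and the function names are all assumptions.

```shell
#!/bin/ksh
# Sketch of the password-reset helper described above (a reconstruction,
# not the original script). Usage: reset_password USERNAME REQUEST_NUMBER

LOGDIR=${LOGDIR:-/var/tmp/reset-logs}   # audit-trail location (assumed)

gen_temp_password() {
    # 12 random alphanumeric characters
    tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12
}

reset_password() {
    user=$1 request=$2
    pw=$(gen_temp_password)

    # On AIX the real work would have been roughly (commands assumed,
    # deliberately left as comments here so the sketch is safe to run):
    #   pwdadm -c "$user"                      # reset, force change at login
    #   chuser account_locked=false "$user"    # unlock if locked
    #   mail -s "Temporary password" ...       # notify the user

    # Save a log entry keyed by the request number for auditing.
    mkdir -p "$LOGDIR"
    echo "$(date) reset user=$user request=$request" >> "$LOGDIR/$request.log"
    printf '%s\n' "$pw"
}
```

Keying the audit log by request number is what removes the duplicate data entry the post complains about: the ticket is typed once and everything else is derived from it.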
OPCFW_CODE
Yesterday I spent a lot of time faffing around, trying to establish the best way to allow guests access to our Internet connection without compromising our network, while at the same time being able to filter the content they can access online. I still feel like I'm back at square one after contemplating solutions that appear to be too costly, or have a low chance of success given my current setup. I'd like to achieve the following:

- A guest wireless access point running a separate SSID from the existing network
- The ability to filter content
- User access restriction (vouchers / generated passkeys / accounts would be a bonus)

The reason I'm looking to have the above implemented is so that I can eventually allow staff access to it as well as guests, for use during their breaks. Any advice would be appreciated!

Depends what your current network looks like. If you have a professional firewall/content filtering appliance, you might be able to set this solution up without any extra hardware. Let's say for a minute that you don't, though: you could provision an old machine, install a community firewall distro (Endian would work fine), and have it running as a DHCP server, firewall, content filter etc. Set it up to run on a different subnet: if you're using 255.255.255.0 at the moment, then use 255.255.248.0 or something, and use a different IP range; again, if you have something like 10.0.0.* at the moment, then use 172.16.*.* or 192.168.*.*. At that point you'll have a working solution up to the point where the guest network meets your work network. How to proceed beyond that is relative to your situation; what is your current network setup at the moment?
Do you have a firewall at the edge, and what do you have for employees in terms of content filtering? I'm curious because if you have, say, forced authentication with a proxy for Internet access that integrates into ADS, or any number of other possible scenarios in place, then you'll run into issues trying to get the guest network out to the Internet, because guests will be using non-domain accounts; you get the picture. How is your work network set up at the moment?

I can see ways to do some of what you want, but not all of it, from your existing set-up; and I don't think it would be possible to make it flexible enough to then add staff access later. As previously advised, I still think that your best option would be the Bluesocket system. I've used these people before http://www.westcomnetworks.co.uk/ they are very knowledgeable, easy to work with and very helpful. Give them a call and ask if they can arrange a demo of the Bluesocket equipment - I'm sure that they would be more than willing to talk to you about it and even lend you a device to test out.

CPR + more is an IT service provider.

Do you have a UTM? If not, look into the Sophos UTM. It will help secure your network, give VPN access, and on top of that it works as a wireless controller (for Sophos-brand WAPs). You can create several wireless networks and do some nifty stuff with guest access. Here is the link: http:/ If you have any questions please feel free to ask.

Ubiquiti also makes a great solution for WiFi and guest networks at a VERY reasonable price. Check out their UniFi series access points. Their standard access points start around $75.
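As a side note on the subnet suggestion above: dotted-quad masks and CIDR prefix lengths are interchangeable, and a tiny helper makes the conversion easy to check (a generic sketch; `mask_to_prefix` is not from this thread):

```shell
# mask_to_prefix DOTTED_MASK -> prints the CIDR prefix length.
# Assumes a valid contiguous netmask, e.g. the common work-LAN mask
# 255.255.255.0 is /24, while the suggested guest mask 255.255.248.0 is /21.
mask_to_prefix() {
  prefix=0
  IFS=. ; set -- $1 ; unset IFS     # split the mask into its four octets
  for octet in "$@"; do
    # count the set bits in each octet
    while [ "$octet" -gt 0 ]; do
      prefix=$((prefix + octet % 2))
      octet=$((octet / 2))
    done
  done
  echo "$prefix"
}
```

For example, `mask_to_prefix 255.255.248.0` prints 21, confirming the guest network gets its own distinct numbering plan alongside a /24 work LAN.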
OPCFW_CODE
Yesterday I was trying to install the new SharePoint 2010 beta on a virtual machine and had a little bit of fun. Well, actually the install itself went really easily, and I started with the full SharePoint 2010 Enterprise Beta. The fun started with the configuration wizard: it got 5 steps in and failed with a Timeout Exception. After simply retrying it and getting the same thing, I started looking around and found this forum post. It talks about lack of memory and how SharePoint 2010 needs a lot of memory to install, and I tried like mad to make sure the VM had enough memory. End of the story: this turned out not to be what was causing my blocking issue. As a side note, I sure hope the SharePoint team doesn't make it so SharePoint needs that much memory for a basic install as suggested.

There is a good blog post by Jie Li, linked to from the above forum post, that was helpful. First, it had the product keys - Microsoft mailed me some, but it took over 24 hours for me to get them (the same ones), and nothing on the download pages really told me where to find them (at least I didn't see it). The post also lists some hotfixes that you have to have depending on your OS and configuration. Additionally, since I was running on a domain controller, it had some setup steps to get the sandbox up and running.

Now back to the timeout exception: it was still happening, and honestly, if I didn't need SharePoint for something I'm working on, I would have thrown it to the side and not looked back. Being determined, I tried several different memory configurations and determined it had nothing to do with memory; further, through SQL profiling, I determined it wasn't a database timeout either. From this forum post I got the idea it might be a service start-up issue. Originally I didn't pay enough attention to this post because it talked about a type not loading, which didn't match my error. Later in the post it talks about a service not starting and trying to start it manually; that didn't show the error either.
Additionally, it talked about registry keys to delete, but it turned out I didn't have those keys. Getting frustrated and desperate to get this configured, I started to get more creative. Since I was on a virtual machine, I figured worst case I could restore from the last snapshot, so I got brave and deleted a key at a time. For me the following key did the trick, and the config zoomed along past the error to a successful completion. So which key? It is inside the following registry path:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\14.0\WSS\Services\

I deleted the following key. I'm normally not one to post about or suggest this type of black magic fix, but in this case it made the difference between me using the beta or not, so here it is.
OPCFW_CODE
Component-based programming has become more popular than ever. Hardly an application is built today that does not involve leveraging components in some form, often from multiple vendors. As applications have grown more sophisticated, the need to leverage components distributed on remote machines has also grown.

An example of a component-based application is an end-to-end e-commerce solution. An e-commerce application residing on a Web farm needs to submit orders to a back-end Enterprise Resource Planning (ERP) application. In many cases, the ERP application resides on different hardware and might run on a different operating system.

The Microsoft Distributed Component Object Model (DCOM), a distributed object infrastructure that allows an application to invoke Component Object Model (COM) components installed on another server, has been ported to a number of non-Windows platforms. But DCOM has never gained wide acceptance on these platforms, so it is rarely used to facilitate communication between Windows and non-Windows computers. ERP software vendors often create components for the Windows platform that communicate with the back-end system via a proprietary protocol.

Some services leveraged by an e-commerce application might not reside within the datacenter at all. For example, if the e-commerce application accepts credit card payment for goods purchased by the customer, it must elicit the services of the merchant bank to process the customer's credit card information. But for all practical purposes, DCOM and related technologies such as CORBA and Java RMI are limited to applications and components installed within the corporate datacenter. Two primary reasons for this are that by default these technologies leverage proprietary protocols and these protocols are inherently connection oriented.

Clients communicating with the server over the Internet face numerous potential barriers to communicating with the server. Security-conscious network administrators around the world have implemented corporate routers and firewalls to disallow practically every type of communication over the Internet. It often takes an act of God to get a network administrator to open ports beyond the bare minimum. If you're lucky enough to get a network administrator to open the appropriate ports to support your service, chances are your clients will not be as fortunate. As a result, proprietary protocols such as those used by DCOM, CORBA, and Java RMI are not practical for Internet scenarios.

The other problem with these technologies, as I said, is that they are inherently connection oriented and therefore cannot handle network interruptions gracefully. Because the Internet is not under your direct control, you cannot make any assumptions about the quality or reliability of the connection. If a network interruption occurs, the next call the client makes to the server might fail. The connection-oriented nature of these technologies also makes it challenging to build the load-balanced infrastructures necessary to achieve high scalability. Once the connection between the client and the server is severed, you cannot simply route the next request to another server. Developers have tried to overcome these limitations by leveraging a model called stateless programming, but they have had limited success because the technologies are fairly heavy and make it expensive to reestablish a connection with a remote object.

Because the processing of a customer's credit card is accomplished by a remote server on the Internet, DCOM is not ideal for facilitating communication between the e-commerce client and the credit card processing server.

As with an ERP solution, a third-party component is often installed within the client's datacenter (in this case, by the credit card processing solution provider). This component serves as little more than a proxy that facilitates communication between the e-commerce software and the merchant bank via a proprietary protocol.

Do you see a pattern here? Because of the limitations of existing technologies in facilitating communication between computer systems, software vendors have often resorted to building their own infrastructure. This means resources that could have been used to add improved functionality to the ERP system or the credit card processing system have instead been devoted to writing proprietary network protocols.

In an effort to better support such Internet scenarios, Microsoft initially adopted the strategy of augmenting its existing technologies, including COM Internet Services (CIS), which allows you to establish a DCOM connection between the client and the remote component over port 80. For various reasons, CIS was not widely accepted. It became clear that a new approach was needed. So Microsoft decided to address the problem from the bottom up. Let's look at some of the requirements the solution had to meet in order to succeed.

- Interoperability: The remote service must be able to be consumed by clients on other platforms.
- Internet friendliness: The solution should work well for supporting clients that access the remote service from the Internet.
- Strongly typed interfaces: There should be no ambiguity about the type of data sent to and received from a remote service. Furthermore, datatypes defined by the remote service should map reasonably well to datatypes defined by most procedural programming languages.
- Ability to leverage existing Internet standards: The implementation of the remote service should leverage existing Internet standards as much as possible and avoid reinventing solutions to problems that have already been solved. A solution built on widely adopted Internet standards can leverage existing toolsets and products created for the technology.
- Support for any language: The solution should not be tightly coupled to a particular programming language. Java RMI, for example, is tightly coupled to the Java language. It would be difficult to invoke functionality on a remote Java object from Visual Basic or Perl. A client should be able to implement a new Web service or use an existing Web service regardless of the programming language in which the client was written.
- Support for any distributed component infrastructure: The solution should not be tightly coupled to a particular component infrastructure. In fact, you shouldn't be required to purchase, install, or maintain a distributed object infrastructure just to build a new remote service or consume an existing service. The underlying protocols should facilitate a base level of communication between existing distributed object infrastructures such as DCOM and CORBA.

Given the title of this book, it should come as no surprise that the solution Microsoft created is known as Web services. A Web service exposes an interface to invoke a particular activity on behalf of the client. A client can access the Web service through the use of Internet standards.

Web Services Building Blocks

The following graphic shows the core building blocks needed to facilitate remote communication between two applications.
OPCFW_CODE
Posted 01 February 2013 - 07:05

Posted 01 February 2013 - 15:59

Posted 01 February 2013 - 20:23

Posted 03 February 2013 - 10:20

What driver are you using?

I am/was using the 13.1 driver from AMD's website.

Weird, I have no problem like this; it might be driver related (though we share the AMD GPU brand). I am not a pro.

The first and third problems, at least, are most likely a result of the proprietary AMD graphics drivers. I would highly recommend purging them and using the open-source radeon driver instead. Your video card is very well supported by radeon, and you will almost certainly have fewer problems with it.

Edit: I recommend that you read through this thread. It has lots of interesting details that you may find helpful.

Posted 03 February 2013 - 17:11

I removed fglrx and it did indeed fix the flicker and virtual terminal issues; it seems to have fixed my issue with resuming from suspend as well. The only downside with what I'm using now is that the performance in games is bad. My output from glxinfo | grep renderer is "OpenGL renderer string: Gallium 0.4 on AMD CYPRESS" - is that correct?

I chucked together two scripts, one to install fglrx (when I game) and one to swap back to radeon (when I'm not). I'm guessing I'd need a reboot in between, but that's not really an issue. Would this be a sensible solution or would it cause issues in the long run?

Posted 03 February 2013 - 20:01

That is an absolutely terrible idea! I strongly recommend that you don't swap drivers on a regular basis. If you really feel like you MUST swap drivers when you game, the least-bad idea is probably to install fglrx from the repository, generate an xorg.conf to force X11 to use radeon when you start your computer, then create a script to stop X11 and load X11 with fglrx (and maybe a low-resource, non-compositing window manager, such as Openbox, to get higher framerates) so that you can game. Your script should probably be capable of switching back as well.
Posted 04 February 2013 - 00:44

Posted 04 February 2013 - 17:11
OPCFW_CODE
package com.alphasystem.docbook.builder.test;

import org.docbook.model.*;

import java.util.ArrayList;
import java.util.List;

import static java.lang.String.format;

/**
 * @author sali
 */
public final class DataFactory {

    private static ObjectFactory objectFactory = new ObjectFactory();

    public static Emphasis createBold(Object... content) {
        return createEmphasis("strong", content);
    }

    public static Caution createCaution(Object... content) {
        return objectFactory.createCaution().withContent(content);
    }

    public static Entry createEntry(Align align, BasicVerticalAlign vAlign, Object... content) {
        return createEntry(align, vAlign, null, null, null, content);
    }

    public static Entry createEntry(Align align, BasicVerticalAlign vAlign, String nameStart, String nameEnd,
                                    String moreRows, Object... content) {
        return objectFactory.createEntry().withAlign(align).withValign(vAlign).withNameStart(nameStart)
                .withNameEnd(nameEnd).withMoreRows(moreRows).withContent(content);
    }

    public static Emphasis createEmphasis(String role, Object... content) {
        return objectFactory.createEmphasis().withRole(role).withContent(content);
    }

    public static Example createExample(String title, Object... content) {
        return objectFactory.createExample().withTitleContent(createTitle(title)).withContent(content);
    }

    public static Important createImportant(Object... content) {
        return objectFactory.createImportant().withContent(content);
    }

    public static InformalTable createInformalTable(String style, Frame frame, Choice colSep, Choice rowSep,
                                                    TableGroup tableGroup) {
        return objectFactory.createInformalTable().withTableStyle(style).withFrame(frame).withColSep(colSep)
                .withRowSep(rowSep).withTableGroup(tableGroup);
    }

    public static Emphasis createItalic(Object... content) {
        return createEmphasis(null, content);
    }

    public static ItemizedList createItemizedList(String id, Object... content) {
        return objectFactory.createItemizedList().withId(id).withContent(content);
    }

    public static ListItem createListItem(String id, Object... content) {
        return objectFactory.createListItem().withId(id).withContent(content);
    }

    public static Literal createLiteral(String id, Object... content) {
        return objectFactory.createLiteral().withId(id).withContent(content);
    }

    public static Note createNote(Object... content) {
        return objectFactory.createNote().withContent(content);
    }

    public static OrderedList createOrderedList(String id, Object... content) {
        return objectFactory.createOrderedList().withId(id).withContent(content);
    }

    public static Phrase createPhrase(String role, Object... content) {
        return objectFactory.createPhrase().withRole(role).withContent(content);
    }

    public static Row createRow(Object... content) {
        return objectFactory.createRow().withContent(content);
    }

    public static Section createSection(String id, Object... content) {
        return objectFactory.createSection().withId(id).withContent(content);
    }

    public static SimplePara createSimplePara(String id, Object... content) {
        return objectFactory.createSimplePara().withId(id).withContent(content);
    }

    public static Subscript createSubscript(String id, Object... content) {
        return objectFactory.createSubscript().withId(id).withContent(content);
    }

    public static Superscript createSuperscript(String id, Object... content) {
        return objectFactory.createSuperscript().withId(id).withContent(content);
    }

    public static Table createTable(String style, Frame frame, Choice colSep, Choice rowSep, Title title,
                                    TableGroup tableGroup) {
        return objectFactory.createTable().withStyle(style).withFrame(frame).withColSep(colSep).withRowSep(rowSep)
                .withTitle(title).withTableGroup(tableGroup);
    }

    public static TableBody createTableBody(Align align, VerticalAlign verticalAlign, Row... rows) {
        return objectFactory.createTableBody().withAlign(align).withVAlign(verticalAlign).withRow(rows);
    }

    public static TableGroup createTableGroup(TableHeader tableHeader, TableBody tableBody, TableFooter tableFooter,
                                              int... columnWidths) {
        List<ColumnSpec> columnSpecs = new ArrayList<>();
        for (int i = 0; i < columnWidths.length; i++) {
            ColumnSpec columnSpec = objectFactory.createColumnSpec().withColumnWidth(format("%s*", columnWidths[i]))
                    .withColumnName(format("col_%s", (i + 1)));
            columnSpecs.add(columnSpec);
        }
        return objectFactory.createTableGroup().withCols(String.valueOf(columnWidths.length))
                .withTableHeader(tableHeader).withTableBody(tableBody).withTableFooter(tableFooter)
                .withColSpec(columnSpecs);
    }

    public static TableFooter createTableFooter(Align align, VerticalAlign verticalAlign, Row... rows) {
        return objectFactory.createTableFooter().withAlign(align).withVAlign(verticalAlign).withRow(rows);
    }

    public static TableHeader createTableHeader(Align align, VerticalAlign verticalAlign, Row... rows) {
        return objectFactory.createTableHeader().withAlign(align).withVAlign(verticalAlign).withRow(rows);
    }

    public static Term createTerm(Object... content) {
        return objectFactory.createTerm().withContent(content);
    }

    public static Tip createTip(Object... content) {
        return objectFactory.createTip().withContent(content);
    }

    public static Title createTitle(Object... content) {
        return objectFactory.createTitle().withContent(content);
    }

    public static VariableList createVariableList(String id, Object[] content, VariableListEntry... entries) {
        return objectFactory.createVariableList().withId(id).withContent(content).withVariableListEntry(entries);
    }

    public static VariableListEntry createVariableListEntry(ListItem listItem, Term... terms) {
        return objectFactory.createVariableListEntry().withTerm(terms).withListItem(listItem);
    }

    public static Warning createWarning(Object... content) {
        return objectFactory.createWarning().withContent(content);
    }
}
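Every factory method above funnels into ObjectFactory's chained withX(...) setters. A minimal, self-contained sketch of that fluent pattern, and of the proportional column-width strings built in createTableGroup, follows; the Entry class and columnSpecs helper here are illustrative stand-ins, not the real org.docbook.model types:

```java
// Minimal sketch of the fluent "with" pattern used by the generated
// org.docbook.model classes. Entry and columnSpecs are illustrative
// stand-ins, not the real DocBook types.
public class FluentDemo {

    static class Entry {
        String align;
        String valign;
        Object[] content;

        // Each setter mutates the instance and returns it, so calls chain.
        Entry withAlign(String align) { this.align = align; return this; }
        Entry withValign(String valign) { this.valign = valign; return this; }
        Entry withContent(Object... content) { this.content = content; return this; }
    }

    // Mirrors DataFactory.createEntry: one chained expression per factory method.
    static Entry createEntry(String align, String valign, Object... content) {
        return new Entry().withAlign(align).withValign(valign).withContent(content);
    }

    // Mirrors the loop in createTableGroup: relative widths become "3*", "5*", ...
    static String[] columnSpecs(int... widths) {
        String[] specs = new String[widths.length];
        for (int i = 0; i < widths.length; i++) {
            specs[i] = String.format("%s*", widths[i]);
        }
        return specs;
    }

    public static void main(String[] args) {
        Entry e = createEntry("left", "top", "cell text");
        System.out.println(e.align + "/" + e.valign);            // left/top
        System.out.println(String.join(",", columnSpecs(3, 5))); // 3*,5*
    }
}
```

The "N*" suffix follows the CALS table convention for proportional column widths, which is why createTableGroup formats each integer width that way.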
STACK_EDU
Senior Software Engineer with Lead experience.

Director of Big Data Engineering @ From 2015 to Present (less than a year)

Principal Software Engineer @ From June 2012 to September 2015 (3 years 4 months)

Senior Software Engineer, Cloud R&D @
- Designed, prototyped and built the production version of the new EnergyScape web site using a Node.js / MongoDB on Linux stack. This customer portal will further SCIenergy’s business development while demonstrating their core mission in a powerful and actionable way.
- Made EnergyScape.net a compelling mobile web experience, leveraging Bootstrap from Twitter and Require.js, to enable benchmarking for business managers and data acquisition for “Building Engineers” without the anchor of a computer.
- The project was deployed in the Amazon Cloud (AWS), leveraging EC2 and its load-balancing feature, S3, CloudFront, SES and CloudWatch. This Node.js solution running in AWS enabled rapid deployments to a cluster of servers with very minimal infrastructure costs.
- Built a deployment script to automate daily code pushes, which proved essential to the “release early / release often” approach.
- Set up Mercurial and a ticketing system through Bitbucket.org to enable better cooperation and communication with the contractors and the product manager on the project.
- Acted as project manager and lead developer, delivering on-schedule completion of the Beta phase.
- Key milestones were met, enabling successful customer and industry partner presentations. From November 2011 to August 2012 (10 months)

Chief Architect @ From September 2010 to December 2011 (1 year 4 months)

Senior Software Engineer @
- Implemented Hadoop Map/Reduce jobs to feed data into FAN’s audience insights system.
- Built and maintained multiple external- and internal-facing web and standalone Java applications using popular frameworks and libraries including but not limited to Spring (MVC), JSF, Hibernate, Ibatis, Quartz, Google Protocol Buffers, JUnit, EasyMock, Powermock, and tools including but not limited to Eclipse, Tomcat 6 and Maven 2.
- Mentored junior Java and .Net developers during my 3+ years at FAN.
- Initiated a weekly .Net study group to go over newer aspects of the framework and its potential use in our projects.
- Interfaced the remote half of my team in Atlanta with the local QA, release, DBA and ops teams to ease communication and remove contention points.
- Refactored, improved and maintained FAN’s white-label Social Network system, written in C# against numerous vertically partitioned SQL Server databases, which was available on various Newscorp entities’ web sites like Fox News, American Idol, Fox Weather, Fox Highlights, etc. From March 2007 to September 2010 (3 years 7 months)

Senior Software Developer @
Designed, created and implemented enhancements and new functionalities to the Bureau of Labor Statistics’ (“BLS”) software application called TopCati using VB.Net 1.1. TopCati is a distributed application to collect US workforce data, including management functionalities, using Oracle 9g and MS Access 2003 as database back ends and Crystal Reports.
- Modified the Oracle database to fit new structure and performance needs, and maintained and troubleshot the TopCati source code.
- Advised the project manager on .Net software architecture, increasing product stability.
- Created tools to increase productivity and efficiency between the BLS national office and regional data collection centers. Tools written in VB.Net and C# include: an ASP.Net website to manage Crystal Report files and outputs, an encryption tool for sensitive Web.config information, a filter builder to automate TopCati filter creation, and a Windows service to monitor and replicate folder and file structure between servers for application mirroring.
- Initiated and led a weekly .Net 1.x and 2.0 study group to facilitate knowledge transfer amongst colleagues. From July 2006 to February 2007 (8 months)

Lead Software Developer @
- Managed the successful design and development, on deadline and under a compressed schedule, of a Migration Wizard tool using .Net 2.0 to convert current Info Pak customers to next-generation products, increasing RWD revenues and retaining market share.
- Researched, recommended and implemented refinements to products converted from Visual Basic 6 to .Net, capitalizing on inherent efficiencies in .NET to enhance RWD’s Info Pak suite.
- Maintained and enhanced the Info Pak suite over time, yielding high customer satisfaction. In 2005, sales were 188% of goal. As of 1st Quarter 2006, maintenance contract renewals exceeded the forecast to date.
- Served as Tier 3 product support for Info Pak to troubleshoot customer issues, yielding close to a 100% customer satisfaction rate, with personal testimonials from Johnson & Johnson and Home Depot expressing satisfaction with the quality of support received.
- Trained junior developers on Info Pak and RWD procedure, increasing development team productivity through cross-training.
- Mentored junior developers on VB.Net, C#, ADO.Net and web-oriented technologies such as ASP.Net and Web Services, fostering teamwork and increasing the team knowledge base.
- Created several internal RWD software tools, such as product support problem diagnosis and SAP payroll output formatting. Presented and distributed these tools for RWD employee use. From November 2004 to July 2006 (1 year 9 months)

Software Tester @
- Tested products for Welocalize’s client Manugistics, finalizing and perfecting their cutting-edge software technologies.
- Sorted and resolved client software issues and trained new testers on Manugistics’ products. From June 2004 to November 2004 (6 months)

Analyst-Programmer (Object Oriented Programmer) @
- Enabled completion of the company’s core product, “MP’Com,” a multi-platform, multi-protocol, automated file transfer system. Created functions such as data encryption and decryption, data compression, PDF conversion and data sorting.
- Conceptualized, proposed and developed “MP’Event Manager,” multi-protocol software in Delphi 6, to immediately inform MP’Com administrators of pre-set MP’Com events, enabling managers to respond rapidly. Developed “MP’Spawn” in Delphi 6 to perform certain tasks automatically upon receipt of a notice from MP’Event Manager, on Windows or on Linux through Telnet remote control. These products generated over 60,000 euros for the company.
- Innovated a “light” version called “MP’ComPro” using Delphi 6 and Kylix 2 to serve small-business needs. Enabled the company to penetrate a new market niche, increase overall market share, and generate 42,000 euros in revenues. Used on either Linux or Windows with LAN and FTP protocols, MP’ComPro provides key functions such as the encryption and compression of data.
- Recognized critical elements missing from Eukles’ IT operations and corrected them. Revised the company’s network to improve test quality and employee productivity. Implemented a backup server and a file share system. Created the company’s first PHP/MySQL intranet hotline database.
From April 2002 to November 2003 (1 year 8 months)

Analyst-Programmer (Object Oriented Programmer) @
- Managed and developed the research and planning capabilities of one of PS’Soft’s principal products, the “Qualiparc Business Process Manager,” a DLL for Microsoft IIS that streamlined business processes for companies with an average starting point of 500,000 end users. Built functionality into the DLL using Delphi 5, and ensured compatibility with the database management systems Oracle, SQL Server, Sybase and DB2. From July 2001 to January 2002 (7 months)

Conservatoire National des Arts et Métiers, From 2001 to 2003
BTS, Computer Engineering; Analyst @ Lycée Estienne d’Orves, From 1999 to 2001

Yann Luppo is skilled in: .NET, Java, Node.js, MongoDB, Hibernate, Spring, ASP.NET, jQuery, Amazon Cloud, Microsoft SQL Server, PostgreSQL, MySQL, Membase, RabbitMQ, Mercurial
OPCFW_CODE
M: Instant Company - jstedfast http://nat.org/blog/2011/06/instant-company/

R: nikcub I think these 'what products and services does your startup use' type articles are more interesting than the usesthis series about what tools developers use. Somebody should set up a blog where they interview a startup founder each week and just ask them to list services they use along with a mini single-paragraph review of each. Edit: after thinking about it, I might just do this as a weekend project. A quick search and I couldn't find anything similar; the closest I remember is the Ajaxian blog startup interviews, which they stopped doing. If you would like your startup featured email me, I'll be reaching out to a few people, so if there is interest I will likely get it going

R: mattmanser These pop up quite often and personally I find them quite boring. A lot of it is personal choice, e.g. IRC & campfire being 'laggy', for me Google apps is meh apart from mail/calendar, you better pony up for MS office if you're dealing with a lot of other businesses, themeforest I find extremely hard to find a decent looking, _well written_ html template, most of them are div crazy, extremely heavy CSS/js payloads or use cufon, kerrschpitt. And assistly looks like a total rip off at $69 p/m per user (to _me_ anyway). I mean swipe might make an interesting submission in itself, but the homepage is light on details, looks like it's in a closed beta, which probably means US only, no good for me. Anyway tl;dr is that the tools your business uses are very personal choices of services many of us already know about, I find them dull. What's more interesting is what's missing: no accounting system, no bug tracking, no server uptime monitor, no analytics, no A/B testing.

R: patrickod You're right; Swipe is in closed beta at the moment. An email never hurts though

R: seats Great list, but to me the last two items aren't like the others.
Everything about starting tech companies has gotten easier and cheaper, but accountants and lawyers haven't really changed all that much. He didn't specify exactly how much they are paying for those two, but it still sounds like it will be a fairly beefy hourly rate or a retainer + equity. I think for a bootstrapped company these are still your two really big overhang costs, where people end up weighing going without or DIY versus committing to legal or accounting as your biggest up-front operating expense. Of the two, I'd say accounting has probably changed the most; there are plenty of workable software solutions for keeping books that aren't too bad, and it seems like there are plenty of people trying to build startups around that particular problem. Can't say the same on the legal item though.

R: mcdowall Great list! Using a few of those myself. If I can be cheeky, I'd love an intro to the guys at Stripe; I think it was a fair few months ago I registered my email for their Beta and would love to implement it for my startup.

R: saikat Hey (Saikat from Stripe) -- not cheeky at all, but certainly flattering. Sorry we've been kind of quiet (we do read Hacker News, though). We're just working hard to implement the feedback we've been getting from our existing users, and we want to make sure our product scales well and gets better as new people use it. Here's a question: any chance you would be interested in having us watch you integrate Stripe? We've been doing this lately to try to make sure our first-run experience is really good. Send me an e-mail () either way.

R: s00pcan Stripe was something on the article I hadn't heard of before. It just seems so logical for cardholder information to go directly from the customer to the payment processor using javascript that I wonder why it hasn't been done before and what you're doing differently. Can you explain?
R: kolektiv Well, hosted payment is not a new thing at all - so you iframe or link to a page you don't host which the customer uses - thus ensuring that card details don't hit your servers and don't give you a PCI surface. This is a fairly logical extension; I would guess that the reason it hasn't caught on more is because a JS requirement has typically been a red flag in e-commerce - 3% of users not being able to pay you once they got to that point of a funnel could be seen as disaster. Interesting, because we're looking at mandating JS in our new developments (background: company I work for does a lot of high end e-commerce - we're specialists). In theory it's a good idea (that side of it at least) but I don't know how security perception and customer acceptance rates will go.

R: s00pcan Oh, of course. I completely forgot that there are some crazy people out there who browse without javascript. I was just jumping at the idea of reducing PCI compliance issues - I've had to deal with them and it's a huge project.

R: there now someone needs to make something to use the APIs of all these sites to be able to control users across all of them from a single location. bringing on new employees or terminating existing ones and having to do it across half a dozen different sites sounds kind of tedious and error-prone.

R: tripzilch Great point. I noticed the same thing. First you get your Google Apps account, and then the passwords for the other accounts are mailed to there, then two weeks later you find that one of the systems has been replaced in favour of another one. Indeed tedious and error-prone. And that's just from the employee's point of view; the administrator having to create all these different accounts is probably even less happy about it.

R: benjohnson eFAX !?!?? eFAX is evil when you try to close your service - you have to go through their horrid 'chat' system and even then I had to cancel my credit card to get them to stop charging. And no...
it's not just me: <http://daviddahl.blogspot.com/2006/05/efax-sucks.html>

R: rabidonrails Launched <http://phaxio.com> into beta a couple of weeks ago...shoot me an email if you'd like an invite (email in profile)

R: pbreit Any way to upload or email a PDF?

R: rabidonrails Absolutely! We have an API that allows you to POST files to fax.

R: kinkora For a web-based company, I would add Amazon Web Services (AWS) at the top of the list. AWS is relatively expensive, but if you are a startup with a limited amount of capital and need to scale quickly, it allows you to utilize a corporate-grade web/computing/server/database infrastructure without having to build one yourself.

R: athst Interesting list, I'd be interested to see what other "stacks" companies are running on.

R: spullara We don't list out all the business services, though we should add them now, but we do have our technology and services stack for production: <http://bagcheck.com/bag/382-bagcheck-technology>

R: timsally It's an interesting contrast how cheap the technical tools are compared to the financial and legal skills retained. I'm not sure if Ropes & Gray does something special for early stage companies, but they are a top and expensive firm.

R: statictype What advantage do these group chat apps have over something like Skype?

R: alanh No spammers, for one. Skype's iOS app is absolutely terrible for chat, too; HipChat's is passable, and of course with IRC you will have a few options.

R: vijaymv_in Amazing list. I am wondering how you handle signatures?

R: omouse Their commitment to free/open source software is astounding! </sarcasm>

R: clistctrl I didn't really find the article that interesting, however looking at this <http://xamarin.com/> company I'm extremely intrigued by the product.
HACKER_NEWS
How fast should I be able to work through Spivak

I am currently self-studying Spivak’s Calculus. Unfortunately I did not have the chance to take math courses in college, so I haven’t been formally taught proof-based mathematics; I’m trying to learn now from Spivak plus a copy of the answer manual. I typically read a chapter twice, then jump into the problem set. I can only do a few of the early problems easily and fully accurately, but I can make some progress on some of the later problems. Then I look at the answer for the first problem I can’t solve, copy it down, and try to understand why it works. Once I can prove it from memory, that proof technique is generally enough to let me prove the next several problems. When I get stuck in a section I once again look at one solution, and often this lets me make great progress on the others. I repeat until I can do most of the non-starred problems and move on to the next chapter, or if I’m really stuck I take a break for a couple of weeks, and when I come back it’s easier.

However, since I have no standard of comparison, I’m not sure how to tell if I’m any good at this. Obviously there’s merit in doing math at any pace, but I’d still like to know if I’m struggling way too much (and should move to something easier), if I’m on pace, or if I’m doing really well. At about what pace and with what level of accuracy should a competent math student be moving through the problems of Spivak’s Calculus? How many hours/days/weeks should a chapter take me? Am I wasting my time, or is math just slow?

Math is slow. And it takes a variable amount of time to get through. Math is the occupation of the patient. You are not working at an unusual pace. It would be easier if you have someone to work with, or better still, guide you. Different people have different aptitudes.

@copper.hat Yup, and that aptitude isn't universal across all of math. Heck, aptitude for learning concepts varies even within texts.
For me, what could take me a week to work through on my own could be 5 mins. with a (suitable) friend. I find this true not just in mathematics.
@copper.hat Reminds me of some terrible nights learning Q Mech all by myself in my school library :(
@DonThousand: I think I am incapable of learning on my own :-).
Thanks for the responses, guys. Is there any good place on here to find such a buddy?
@Samuel Sadly, no. There are nice chatrooms, though, if you want to talk out some problems.
Alas. Well, I'm moving pretty slowly as I work another job ~70-80 hours a week, so I'm not sure it would be easy to find a buddy who works at my pace of nothing at all and then a huge burst on breaks/some weekends. Thanks anyway! Also, is it worth working through all of Spivak, or just certain chapters?
STACK_EXCHANGE
I will admit, the game originally showed promise and was rather Stanley Parable-esque, but it went downhill fast. It was mildly amusing - albeit slightly painful - and continually expressed that it wasn't meant to actually be a game (so we can't exactly call it a bad game now, can we?) but instead an experiment. I doubt anybody will ever really know what said experiment was focusing on, but I do hope Anothink got the results he was looking for, as the experience was bizarre, badly programmed and generally not very good. I mean seriously, what were those rooms with the Half-Life 2 Stalkers etc. about? They made no sense and were just not necessary in any way.

The game is horribly designed. Its underlying mechanics are way too simple to make a compelling simulation, and work in a sometimes incomprehensible and illogical manner. The tools to properly work on titles are either not present or inaccessible at the stages they would normally be available to game developers. Pressure builds in a silly manner due to misdesigned systems interaction (workforce management versus results in a stupidly compressed timeframe). If you **** up, you can basically start over, because correcting your mistakes so that they won't hurt you two hours down the line is nearly impossible. The progression is also painfully static. It basically forces you to replay, but doesn't offer any replay value to incentivize this. Overall, the game feels really horrible. It's unrewarding; the components are slapped together in a stupidly simplistic and obscure box with mediocre window dressing. You can get hundreds of current games that have a WAY better value at this price. Weak.

First, using a promo to get votes: lame. Second, the wording of the big banner on the promo is made to look like anyone voting gets a free copy (purposefully deceptive). Just more less-than-honest tactics from a bad developer. I'll lose all respect for Desura and IndieDB if this guy wins anything.
It can be quite fun but wow, it feels so unpolished and cheap and nasty.
- No music or background sound most of the time. Silence. Feels really weird.
- Some things don't have sound effects. E.g. hitting with an axe when there's nothing there.
- Enemies are so stupid and move in the derpiest way imaginable (almost as if they are in an online game and you have lag).
- Animation is somewhat primitive
- The background tears nastily when scrolling
- Can't redefine keys.

"The narrative develops through the player's interaction with the game's world." I would like to better understand what interactions one can do? Until two thirds into the "experience" I could do nothing but move the mouse around and look. Sorry, but I found this a complete waste of time.

As a huge fan of Lunar Lander and the many games that drew inspiration from it, I gotta say this game is a bit of a disappointment. No thrust/speed indication, no ship rotation, no fine-tuned thrust control (it's digital, full off or full on); basically a fairly uninspired puzzle game. With work it has the potential to be fun, but right now I'd rather play Lander or any of a number of other free Flash-based games that do the concept much better. I rarely if ever bother to write reviews, but feel compelled to do so for this one in the hopes I save anyone on the fence the $4 entry fee to something that really doesn't seem to me to be worth the cost.

Personally, while it seems pointless and frustrating, I think there is a deeper meaning to this. I think it is a statement saying how, no matter how bad a game is, we play, and keep playing, it to level up, gain XP and unlock literally meaningless perks; it's our human nature to acquire things. Not saying the game is bad per se, just interesting.
OPCFW_CODE
Does Stack Overflow support gracefully moving an off-topic question somewhere more appropriate?

There are quite a few "off-topic" questions on Stack Overflow that are flagged, stopped, voted down, and so on, which nevertheless have very useful information for me. Some have solved problems for me and I would even want to add a comment or reply. Is there a graceful way to migrate these threads somewhere else so that they may continue to live, without cluttering up the main Stack Overflow site with "off-topic" questions?

yes, there are places where they can 'continue to live', your PC for example. Or if you prefer, you can put them in another website (giving proper attribution). Remember that "user contributions are licensed under cc-wiki with attribution required"

Do you mean new questions or old? I ask because old questions get the historical lock. New questions like this are really bad for the site as a whole, as described in Stack Overflow: Where We Hate Fun

We already have a feature known as migration. 3k users can vote to migrate off-topic posts to a certain subset of all the sites. Diamond moderators can migrate to any site. Just flag the post with a custom flag, saying "migrate to X.stackexchange.com". (List of all sites here)

BUT: This doesn't mean that there is always a site to migrate to. The network clearly doesn't cover all topics, so many questions may just be off topic--no migration needed. If the question is a programming question and off topic on SO (and Programmers), then it is most probably just off topic. SO+Programmers don't handle all programming questions; it is restricted by the faq. This applies to any destination migration site. Many a time, mods have to discuss with the mods of the destination site before migrating--so a question may not be migrated due to the site scope. For example, this may be on-topic on Gaming, but I highly doubt it. We do not migrate crap.
Some questions may be on-topic elsewhere, but if they're not too good they don't get migrated. For example, this may be on topic for security.se--but it is not constructive, and thus won't be migrated. There are also questions which would be tolerated if asked on the destination site directly, but would probably not be migrated if asked elsewhere. Like this one. Maybe on topic for our Unix & Linux site, but not too good.

We also (generally) don't migrate old stuff. Also, when in doubt, the Moderators ping the other site's moderators and ask them if they even want the question.

Users should not cross-post a post to multiple sites.
STACK_EXCHANGE
Insufficient privileges for Revoke-AzureADUserAllRefreshToken

I am trying to revoke the refresh tokens of a specific user (my own) in AzureAD to force a completely new logon to an application. As there is no UI option for this in the Azure Portal (there actually is -> see in one of the answers), I am using the Windows Terminal's 'Azure Cloud Shell' option as follows, directly from the built-in Azure Cloud Shell:

Connect-AzureAD
PS /home/...> Revoke-AzureADUserAllRefreshToken -ObjectId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

The GUID I pass in the parameter is the object ID of my user. Unfortunately this fails due to a permission issue:

Revoke-AzureADUserAllRefreshToken: Error occurred while executing RevokeUserAllRefreshTokens
Code: Authorization_RequestDenied
Message: Access to invalidate refresh tokens operation is denied.
RequestId: fd5f5256-3909-46af-b709-8068e0744f25
DateTimeStamp: Mon, 09 Aug 2021 16:56:28 GMT
HttpStatusCode: Forbidden
HttpStatusDescription: Forbidden
HttpResponseStatus: Completed

If I try to execute the same in the Cloud Shell within the Azure portal, the result is the same. If I use a 'classic' PowerShell, then it works. So apparently something is missing with the authentication of the Cloud Shell. When I log in I get to select the right tenant, and my read access, e.g. to the user list, works perfectly. I have no more clues about what I could be missing:
- I am Owner of the subscription in the Azure role assignments
- I do have the Global Administrator role assigned in AzureAD

Is there some special command to 'elevate' the permissions?

I tried to reproduce the issue on my Azure AD tenant, but unfortunately I didn't receive the error you are getting.

Note: Make sure you connect to AzureAD with your Global Admin account, i.e. <EMAIL_ADDRESS> or <EMAIL_ADDRESS>, so that you see the correct details in every column in the above red box.

Other options: From the Portal, you can go to the user profile and click on "Revoke sessions".
Using Graph Explorer you can revoke sign-in sessions:

POST https://graph.microsoft.com/v1.0/users/UserObjectID/revokeSignInSessions

Reference: user: revokeSignInSessions - Microsoft Graph v1.0 | Microsoft Docs

Thank you AnsumanBal-MT for the detailed answer. The "Revoke sessions" button is a very good hint; I did not notice it so far, as I was so focused on getting the CLI working. Logging in with the same admin user, pressing this button was successful - so my user seems to have the permissions.

Coming back to the terminal-based access: when executing Connect-AzureAD, in my case it does not give any output. Also, if I try LogLevel Info it does not write any log file, and using the Confirm option also did not give any prompt. I was trying this command in the Azure portal's built-in Cloud Shell, as well as with Windows Terminal's 'Azure Cloud Shell' option. I now installed the AzureAD cmdlet package for classic PowerShell and use it there -> there I do get the expected output and also log files. I can confirm that the logon user is correct, and the revoke works from there.

Glad to hear that it worked! Yes, you are correct, I have provided it for the AzureAD cmdlet for classic PowerShell.
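The same revokeSignInSessions call can be issued from code rather than Graph Explorer. A minimal Java sketch (java.net.http, JDK 11+) that only builds the request is below; buildRevokeRequest, objectId and accessToken are illustrative placeholders, and acquiring a bearer token (e.g. via MSAL) plus the required admin permissions are assumed, not shown:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RevokeSessions {

    // Builds the Microsoft Graph revokeSignInSessions request described above.
    // objectId and accessToken are placeholders; send the request with an
    // HttpClient and a real bearer token to actually invalidate the sessions.
    static HttpRequest buildRevokeRequest(String objectId, String accessToken) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://graph.microsoft.com/v1.0/users/"
                        + objectId + "/revokeSignInSessions"))
                .header("Authorization", "Bearer " + accessToken)
                .POST(HttpRequest.BodyPublishers.noBody())  // the endpoint takes no body
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildRevokeRequest("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "<token>");
        System.out.println(request.method() + " " + request.uri());
    }
}
```

The caller still needs the same rights discussed in the question; a token without sufficient privileges will produce the same Authorization_RequestDenied / 403 response as the PowerShell cmdlet.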
STACK_EXCHANGE
On 16 March 2020, an abrupt change entered my daily life, as it did for many others in the United Kingdom. All of a sudden, instead of working in a laboratory fabricating microfluidic devices, operating single-molecule detection setups, and preparing for my weekly undergraduate teaching, I found myself scrambling to bag up everything I might need from my office so that I could be productive from home for an indefinite period of time. It seemed a bit surreal, only having been back in the country for 2 weeks after attending the Biophysical Society meeting in San Diego, California, and taking a short holiday in the United States to visit family, to suddenly be leaving again. However, as we all did, I packed up my things, and away I went. Three days later, I found myself shackled in the countryside of the English county of Hampshire, attempting to find a new way to progress my PhD without any ability to gather wet-lab data. No longer could I spend every day in the lab, doing experiments to study the molecular determinants of protein phase separation. I would have to become more creative about how to spend my time. Still, due to the rich nature of my scientific field, I found a way. Instead of pipetting solutions, I now spend my mornings pouring a second cup of coffee and reading a backlog of articles that I said I would come back to. These range from those directly implicated in my research in phase separation to more distant articles, such as how high-energy impacts from meteorites could have been the catalyst to forming early biologic covalent bonds. Instead of operating microfluidic devices, I find myself analyzing a copious amount of data, writing, editing, making figures, and rewriting, trying to piece together coherent stories to share with the scientific community. Instead of instructing undergraduates, I find myself engrossed in online trainings of how to fit reaction data and process biological images. 
Isolation has afforded me much time to reflect, particularly on why I decided to go into biophysics for my PhD, and specifically how lucky I am to be in this field at this chaotic time. It took something quite drastic for me to rise above the hustle and bustle of daily life in academia and sit back and think about the greater purpose of studying biophysics. For me, biophysics represents not a single field but a scientific way of thinking that uses the interwoven nature of all areas of science. It recognizes the intricate complexities of the natural world that extend beyond the pigeonholes of strictly defined disciplines. It is a field in which someone with a background in chemistry, such as myself, can easily branch out to learn about the in-depth fluid dynamics of proteins in the cytoplasm, and at the same time, learn lessons from the physics of polymer blends to better understand cohesive forces between biomolecules. This lack of boundaries comes to light for me each day, when I think about problems related to the expression of a chimeric protein, aimed at studying the spread of aggregates between cells at one moment, while the next moment, I am designing microfluidic devices for assaying the most fundamental thermodynamic properties of biochemical systems. Later on, I can be found developing models for how the translational friction coefficients of protein assemblies change during growth for different assembly geometries. Needless to say, one doesn't have to be locked away in the countryside to realize the interdisciplinary nature of their field, but it certainly does afford one ample time for such contemplation. After this time in isolation, I see profound changes happening in the way I am conducting my PhD. It is too easy to be bogged down in a single niche subfield when scrambling to complete a thesis, and I'm not sure it affords one the best preparation for life after graduation. 
Instead, I'll continue my isolation practices of reading diverse topics, planning experiments, and spending time to understand theories that are not only scientifically interesting but that also teach me new information and techniques to add to my scientific repertoire. At the end of the day, a PhD is about learning as much as you can, pushing the boundaries for continued knowledge gathering and improvement. Those of us in biophysics should consider ourselves lucky to not have strict boundaries, to be able to pursue vastly different realms of science under a central umbrella, and to never forget to keep branching out.
OPCFW_CODE
I agree with Karen: due to the sensitive info and the Summon API terms of service it's important to avoid exposing your credentials - the same reason nobody probably has or will offer a public proxy to the Summon API, as mentioned in the embedded email from Oct. 26. On the other hand, you can create a local proxy/JSONP web service in your language of choice and call it from JS - taking care to try and limit access to your service to your own JS files, etc. I can share our (nclive.org) PHP Summon API caller function (if PHP is a language you use), but it'll be better in a week or so. It's still missing code comments, special char. escaping, etc. It just returns the native Summon format (change to XML or JSON), so one would need to add the GET parts and have it return JSONP with a JSON header, etc., to turn it into a local JSONP web service that talks to Summon behind the scenes. In the meantime, maybe someone else on this list has a more ready-to-share solution. On Mon, Nov 3, 2014 at 9:05 AM, Karen Coombs <[log in to unmask]> > I don't know what the Summon API uses to authenticate clients. It looks > from the Python code like a key and secret is involved. You should be careful, as exposing these > makes them available for anyone to copy and use. > On Sun, Nov 2, 2014 at 4:12 PM, Sara Amato <[log in to unmask]> wrote: > > they can be constructed to use jsonp and avoid cross domain problems > > Subject: > > Re: Q: Summon API Service? 
> > From: > > Doug Chestnut <[log in to unmask]> > > Reply-To: > > Code for Libraries <[log in to unmask]> > > Date: > > Wed, 27 Oct 2010 11:56:04 -0400 > > Content-Type: > > text/plain > > Parts/Attachments: > > text/plain (45 lines) > > Reply > > If it helps, here are a few lines in python that I use to make summon queries: > > def summonMkHeaders(querystring): > > summonAccessID = 'yourIDhere' > > summonSecretKey = 'yourSecretHere' > > summonAccept = "application/json" > > summonThedate = datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT") > > summonQS = "&".join(sorted(querystring.split('&'))) > > summonQS = urllib.unquote_plus(summonQS) > > summonIdString = summonAccept + "\n" + summonThedate + "\n" + summonHost + "\n" + summonPath + "\n" + summonQS + "\n" > > summonDigest = base64.encodestring(hmac.new(summonSecretKey, unicode(summonIdString), hashlib.sha1).digest()) > > summonAuthstring = "Summon "+summonAccessID+';'+summonDigest > > summonAuthstring = summonAuthstring.replace('\n','') > > return summonAuthstring > > --Doug > > On Tue, Oct 26, 2010 at 6:46 PM, Godmar Back <[log in to unmask]> wrote: > > > Hi, > > > Unlike Link/360, Serials Solution's Summon API is extremely cumbersome to use - requiring, for instance, that requests be digitally signed. (*) > > > Has anybody developed a proxy server for Summon that makes its API easier to use (e.g. receives requests, signs them, forwards them to Summon, and sends the result back to a HTTP client?) > > > Serials Solutions publishes some PHP5 and Ruby sample code in two API libraries (**), but these don't appear to be fully fledged nor easy-to-install solutions. (Easy to install here is defined as: an average systems librarian can download them, provide the API key, and have a running solution in less time than it takes to install Wordpress.) > > > Thanks! 
> > > > > > - Godmar > > > > > > (*) http://api.summon.serialssolutions.com/help/api/authentication > > > (**) http://api.summon.serialssolutions.com/help/api/code > > > nitaro74 (at) gmail (dot) com "Hope always, expect never."
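The snippet quoted above is Python 2 (unicode(), base64.encodestring). For anyone trying this today, here is a hedged Python 3 re-creation of the same HMAC-SHA1 signing scheme; the host, path, and fixed date below are placeholder assumptions for illustration, and the real values must match the request you actually send.

```python
# Hedged Python 3 re-creation of the Python 2 signing snippet quoted above.
# The access ID, secret, host, and path are placeholder values; the real
# ones come from your Summon account and the request you are building.
import base64
import hashlib
import hmac
from urllib.parse import unquote_plus

def summon_auth_header(query_string, access_id, secret_key,
                       host="api.summon.serialssolutions.com",
                       path="/2.0.0/search",
                       accept="application/json",
                       date_string="Mon, 01 Jan 2024 00:00:00 GMT"):
    """Build the 'Summon <id>;<digest>' Authorization header value."""
    # Sort and URL-decode the query parameters, as the quoted snippet does.
    qs = "&".join(sorted(query_string.split("&")))
    qs = unquote_plus(qs)
    # Newline-joined ID string: accept, date, host, path, query string.
    id_string = "\n".join([accept, date_string, host, path, qs]) + "\n"
    digest = base64.b64encode(
        hmac.new(secret_key.encode(), id_string.encode(),
                 hashlib.sha1).digest()
    ).decode()
    return f"Summon {access_id};{digest}"
```

In real use the date string would come from the current UTC time and must also be sent in the request's date header, as in the original snippet.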
OPCFW_CODE
The plastic surgery face database is a real-world database that contains 1800 pre- and post-surgery images pertaining to 900 subjects. For each individual, there are two frontal face images with proper illumination and neutral expression: the first is taken before surgery and the second is taken after surgery. The database contains 519 image pairs corresponding to local surgeries and 381 cases of global surgery (e.g., skin peeling and face lift). The details of the database and the performance evaluation of several well-known face recognition algorithms are available in the paper mentioned below. The list of URLs is compiled in a text file along with a tool to download the images present at these URLs. The tool will download the images and store them at the specified location. - Text file containing the URLs (7KB) (CRC32: D132C4A2, MD5: 0FBE3041D95FEE000CAF263048B52480, SHA-1: 325AAED6F31E4AE1471DE44F91A9BC2B63B0AAFD) - Tool to download the images (11KB) (CRC32: 2548A7C1, MD5: 79DF4CF8D12B724DCB6B54827C9C9738, SHA-1: 44731E292E05FFE683C2BB5BC1A67216ACADF2DC) - To obtain the password for the compressed file, email the duly filled license agreement to [email protected] with the subject line "License agreement for Plastic Surgery Face Database". NOTE: The license agreement has to be signed by someone having the legal authority to sign on behalf of the institute, such as the head of the institution or registrar. If a license agreement is signed by someone else, it will not be processed further. This database is available only for research and educational purposes and not for any commercial use. If you use the database in any publications or reports, you must refer to the following paper: - R. Singh, M. Vatsa, H.S. Bhatt, S. Bharadwaj, A. Noore and S.S. Nooreyezdan, Plastic Surgery: A New Dimension to Face Recognition, In IEEE Transactions on Information Forensics and Security, Vol. 5, No. 3, pp. 441-448, 2010. 
Disclaimer: The images in the plastic surgery database are downloaded from the internet and some of the subjects appear on different websites under different surgery labels. Therefore, this database may have some repetition of subjects across different types of surgeries. We have identified multiple cases with such inconsistencies and provided an errata. If you come across some other cases as well, kindly report them to us. Please find the text file of errata. The images separated by a comma (,) represent the redundant entries in the database.
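Since MD5 and SHA-1 checksums are listed above for both downloads, a short script can verify a file after downloading it. This is an illustrative sketch, not part of the distributed tool; the file path is a placeholder.

```python
# Sketch: verifying a downloaded file against the MD5/SHA-1 checksums
# listed above. The path passed in is a placeholder example.
import hashlib

def file_checksums(path):
    """Return (md5, sha1) uppercase hex digests of a file, read in chunks."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest().upper(), sha1.hexdigest().upper()
```

Compare the returned digests with the values published next to each download; a mismatch indicates a corrupted or tampered file.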
OPCFW_CODE
The Speccies - ZX Spectrum The creation process - Part 1 Last week, the game I was working on, "The Speccies", was released. It was released as a free digital download for the ZX Spectrum and was also available as a limited cassette copy. You can get more details about the game, including the download, from the Tardis Remakes website. Go download it now! I'll go into how I went about creating the physical copies of this game in a separate article. When I was first asked to do the graphics for this game away back in February, I knew nothing of the game it was based on - The Brainies/Tiny Skweeks, which was released on just about every format other than the ZX Spectrum - and at this point "our" version was still going to be called "The Brainies". I was keen to work on another Spectrum game that would actually be released, having been involved in a few others that just faded away. I looked at the graphics from the DOS version that were sent to me, and Søren, the coder of the game, may now be surprised to know that my heart sank when I saw those graphics. I still had no idea how the game played and I thought the DOS graphics were terrible. Having a look online, the SNES and Amiga versions weren't much better. Now, top-down graphics can be quite difficult to do, but when you are limited to 2 colours and 16x16 pixels, things start to get a little tricky. Doing a top-down graphic of a Brainie walking wouldn't be very exciting, both visually to the player and to me, the person creating the graphic. I decided that I'd have the Brainie roll. Yeah, roll. That'd be fun to do! The number of frames to animate a sprite on a ZX Spectrum can be limited due to the small amount of memory available. 4 frames would be no good, but if I could use 8 frames, then I thought it'd look good. Thankfully, I was told, "Sure, 8 frames is no problem. It could make things easier if everything had 8 frames." I'm paraphrasing, of course. 
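To put the memory point in rough numbers: assuming a plain monochrome bitmap with no mask or attribute data (my assumption, not stated in the post), a 16x16 two-colour sprite is cheap per frame, so 8 frames is an easy ask.

```python
# Back-of-the-envelope sketch (assumes 1 bit per pixel, no mask or
# attribute data) of the memory cost of sprite frames on the Spectrum.
def sprite_bytes(width_px=16, height_px=16, frames=8):
    """Memory for a set of monochrome sprite frames, in bytes."""
    bytes_per_row = width_px // 8        # 8 pixels packed per byte
    bytes_per_frame = bytes_per_row * height_px
    return bytes_per_frame * frames

# 8 frames of a 16x16 two-colour sprite:
# 2 bytes per row * 16 rows = 32 bytes per frame, 256 bytes in total.
```

Even eight frames for all four directions would only be a couple of kilobytes, which helps explain why "8 frames is no problem" on a 48K or 128K machine.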
Off I went to create a sprite that I was determined would have something of my design in it....and this is what I came up with. I know, pretty rubbish, right? Not only was the character itself pretty uninspiring, but the frames for the rolling just weren't right either. I took a step back from the computer and went about sketching each frame on paper using pencil, and then I'd film it using the Vine app on my iPhone. I would kill some time on a Friday night, at least. Not only was this an exercise in getting each of the 8 frames needed for rolling, but also to try out a slightly new design for a Brainie. When I create a character, it's all about proportion, and I felt I now had that right. I still felt that there wasn't enough on the Brainie itself, so when it came to creating the character in pixels, I highlighted its face. This was also a way of showing that it was rolling, rather than its eyes and feet looking like they were spinning. And after only 3 iterations (the 2nd displayed up there, and the 1st is almost identical to the 2nd other than having no "brow") I got the character design right and it never changed. I had the sprites for moving down only, though, therefore looked into frames for it turning around - couldn't manage it due to the 16x16 pixel limit - always facing down but rolling backwards when going up, and always facing down but rolling sideways left and right - again the 16x16 pixel limit proved this impossible. I even considered making the character smaller by 1 pixel on all sides, but then it actually lost character, so I quickly abandoned that idea. In the end, we compromised. When he was static, he always faced you, and on selecting which way to move, he'd face that direction and roll. I'd add more facial expressions and movements to it later on in development, but I was feeling good. I was also starting to get into the game, especially since it was similar to a game I'd just played and loved on my iPhone, "Squarescape". 
I had got what I thought was an ace animated character that really looked like it was rolling properly, and I assumed the difficult part was over. There wasn't much else to be animated and surely the other sprites/tiles would be relatively easy. How wrong I was! It turns out this graphic was one of the easiest.
OPCFW_CODE
So apparently it's illegal to talk about technology difficulty here? I tried creating a question asking what work is involved in creating a facial recognition app and got recommendations on my life and how to spend my free time... I want to talk about what kind of libraries are available to do a certain thing within Android and what kind of difficulty a task is and I get mundane irrelevant answers ending in my thread being closed? I guess here's another Exchange group that should be forgotten. Out of 5 answers, not a single one was relevant to my question. It entirely is possible your question is offtopic here. Can you post it here so we can try to help you? Possible duplicate of Green fields, blue skies, and the white board - what is too broad? @MetaFight for your convenience here is a screen shot of deleted question @Snowman Nope.. Q & A, of the type that we practice here at Stack Exchange, rests on a few fundamental principles and assumptions: Each site has a specific subject matter area. Questions asked on Stack Exchange must fit the subject matter of the site you post them on. The questions you ask must have a well-defined scope; that is, they must be specific enough to be answerable. What that means in practice is that you can't ask questions that solicit opinions, ask for a list of things, make product recommendations, or are too broad. There are very good reasons why we follow these principles. If you've ever tried to get an answer to one of your questions in a forum environment, you already know why we have these conditions: it's very nearly impossible to get a decent answer on a forum. In short, forums suck. So we do everything we can to avoid those forum behaviors that prevent people from getting good answers to their questions. This reduces the noise and is more attractive to those subject matter experts who are here to provide answers to your specific questions. 
If you are not aware of these principles, or fail to follow them for whatever reason, you're going to have a very hard time participating anywhere on Stack Exchange. I followed said principles. If you had, your question would not have been closed. Yes, that's why this is such a weird thing for the mods to do. I guess I can't expect them to do a good job nowadays. @insidesin: You sound like a criminal who was caught red-handed with a stolen wallet, trying to tell the police that the guy just gave him the wallet. Your questions were not OK for this site, period. We do not allow questions that ask "what libraries are available" for something. We do not allow open-ended questions like "how to create facial recognition software". You can try to pretend that your questions are on-topic for the site, but our rules are quite clear that they are not! @insidesin: Again, this isn't your own personal soapbox. If you want to do that, take it somewhere else. @insidesin: "I didn't ask what libraries were available. I asked if there were libraries to make it easier." Same difference: you were asking for libraries. That's not an acceptable question for this site. @NicolBolas I wasn't asking for libraries. I was asking if there were libraries. It's a yes or no answer. @RobertHarvey If I want to do what? Ask relevant questions to do with software technologies? @insidesin consider giving a read to Question closed because yes/no answer @gnat That wasn't the question of my post though. That was just some idea or direction in how I want it answered. @insidesin: "that's why this is such a weird thing for the mods to do." – No moderators were involved in either the closure or the deletion of your question. Please, get your facts right. @JörgWMittag sorry, accidental forum expression. It's unfortunate that you aren't familiar with forums either. What you were asking for is a topic where a useful answer could fill a whole book. 
Such questions are usually considered too broad for the Q&A format of this site. Note that facial recognition is still a topic of scientific research. You argued it is not too broad, since you only want to know how much work is involved. However, that is nothing strangers on the internet can tell you, because we do not know you, your background, what exactly the features you imagine for your program are, or the quality of the facial recognition you expect. Each of these details can easily make a difference of a factor of 20 or more in the resulting effort. So even if your question had not been closed as "too broad", it should have been closed as "primarily opinion-based". Moreover, it is not "illegal" to talk about technology here. Almost any question on this site has a specific technological context. However, we do not give any technology or library or tool recommendations, and questions asking for such a recommendation are typically closed quickly by the community, too. See also: Why was my question closed or down voted? here is the link (the question is deleted but you can access it with your rep level): What work is involved in creating a mobile filter/face-swap style program? @gnat: thanks, I should really learn how to use the query features of this site to find such a question by myself :-) I didn't query - just checked 10K tools page and found it in recent deleted questions How can that fill a whole book? It's asking about the technology of one very specific method. I guess I should just face the fact that people don't like advanced topics here. @insidesin: http://www.springer.com/in/book/9780857299314 http://www.intechopen.com/books/face_recognition @DocBrown https://www.amazon.com/How-Spot-Liar-People-Truth/dp/1564148408 @DocBrown https://www.amazon.com/Liespotting-Proven-Techniques-Detect-Deception/dp/0312611730/ Boy, being useless is fun! 
@insidesin: Note that meta is here to give you the ability to provide feedback or get answers to your questions about the SE platform. It's not your personal soapbox; rants are not really welcome here. @insidesin: "Boy, being useless is fun!" He's not being useless. You asked how facial recognition could require a whole book. He demonstrated how it can by showing you several books about facial recognition. You lost the argument. @insidesin: kid, if you don't like what is on- or off-topic on this site, better ask your questions somewhere else. @RobertHarvey Good to know, when I start ranting you can shut me up. Til then it'd be nice if you stopped spitting irrelevance. @NicolBolas There was no argument. Everything can fill up a whole book, now if all you're going to do is be facetious, please stop and refrain from wasting my time. @DocBrown "kid" ? I'd love for this site to be worth something, but a simple test has shown that it unfortunately is not.
STACK_EXCHANGE
Why register your printer? These games are not shown due to an attack on the machine at the time reported by Security in 2012. Both phones are expected to be more popular than table and slot games on various sites. To download the Telltale game, bring your six-part gamble and run the desktop application. We find the audio quality to be the performance to handle most applications well. Maybe it’s just the slots that are loose or tight that make the audio quality ridiculous. One of the cases when video poker slots are bonus rounds. Bootcamp comes with hardware video acceleration and our destination is London this morning. 14.13 lots of internal memory is not a good enough reason to delay the hardware further. No one wants to scribble a little more while still being pretty soft. As for performance, the R705 takes a pure video playback engine and adds a lot of features. This information allows him 16 or 20 hours of standard definition video on other machines. 17:36 We can see the extended information, but everyone wants the battery life to be short as well. We plead and plead, but the creators of this system seem to have found information that proves otherwise. They are currently compatible with our Mac Pro test system, but that’s not all. But unlike the Dell Venue 8 Pro, for example, an accelerometer-protected hard drive for security-conscious users. Apple Retina Display Macbook Pro has a different four-way controller depending on who you ask. This happened on the previous generation 27″ iMac, and the same thing happened to the iMac with Retina display. As Mrijaj says, tight lock screens, for example, have this rich color rendering. I actually declined both offers, but the screen refresh rate increased. An additional advantage is that Google works this way or at least 7 minutes or his two lenses placed diagonally to each other, three sticks and a small notch iPhone 12.7 either with a jackpot . The condition can be cut considerably with one from 2 months ago. 
The downside is that many of them can be played in demo mode. However, a Bluetooth connection can match sound effects and flashing lights or noises. Unsurprisingly, the narration is by Academy's Alex Pritchard, with the rest on loan to Hoffenheim. Next, the paytable section lists Atlantic City as the classic controller. So try to end the overhead to keep the old slots in the new model. For example, the higher the slot machine success rate, the higher the payline. Halo69 Slot is back; most people will. The Dell XPS will be a good match. The counterfeiting coalition said Las Vegas is heavily regulated to guarantee a certain standard. At 11:07 some said compact, or even while we do not know. For California casino customers, these are some of the first power lines that match our review of this. Of only two improvements, it represents a massive upgrade to the phone. Players first start broadcasting to each other via Bluetooth, freeing up USB ports.
OPCFW_CODE
The master_2020 version can only be loaded over a database that was previously updated to master_2019.2. So please make sure you are running on the most current version of master_2019.2 before you upgrade to master_2020. Note that an error will be reported if an attempt is made to update from a previous version. Needless to say, master_2020 is a major update for SimpleInvoices. For one thing, it requires that you are running on a version of PHP 7.4 or greater. Although it might work on previous versions, there is no guarantee that will always be the case. The benefit of PHP 7.4 is faster processing and greater security. From a development perspective, it provides better features to use in the development of applications such as SimpleInvoices. The most major change made in master_2020 is the removal of the Zend Framework 1 libraries. Areas affected by this change include: - Session handling - Access Control List (ACL) logic that determines what you can access based on the user type you are logged in as - Formatting libraries used for numbers, currencies and dates - Application logging - Access to configuration file settings Additionally, this update uses Composer and Node for vendor and jQuery library maintenance. This change helps automate the process of keeping support libraries up to date. master_2020 has an updated report generation system. In previous versions, only the Statement of Invoices report presented buttons to Print, Export to PDF, XLS or DOC files and an Email option. The new report framework supports these options for all reports. Also, the presentation of the reports was standardized so the look of all reports is the same. There are two configuration files in master_2019.2: config.php & custom.config.php. In master_2020 these are changed to the .ini files: config.ini & custom.config.ini. The "config" file contains all key/value settings that are needed to configure SimpleInvoices and is maintained as part of the SimpleInvoices code. 
The “custom.config” file is the runtime version that contains the same “config” file keys but with values set for your implementation (DB name, user & password, etc). A process is included in the master_2020 update to convert your “custom.config.php” file to the new “custom.config.ini” file format. Update to master_2020 from master_2019.2 - Save a copy of your “custom.config.php” file located in the config directory for use later in this process. - Export your full database using phpMyAdmin or whatever administration tool you use. - Backup your full SimpleInvoices implementation. Make a .zip or .gzip copy of your full SimpleInvoices directory path. Include the database extract from step 2 in the backup file. - Save the backup file in a directory separate from your SimpleInvoices directories. - If you have developed your own extension or custom hooks, they will be included in the backup file. - Download the “master_2020” version of SimpleInvoices. - Delete your SimpleInvoices root directory. This includes all sub-directories within it. - Extract the content of the downloaded “simpleinvoices-master_2020.zip” file into the “document root” directory of your webserver. - Rename the new directory, “simpleinvoices-master_2020” to the name of the SimpleInvoices directory you deleted in step 7. - Copy the “custom.config.php” file saved in step 1 into your SimpleInvoices “config” directory. - Using a text editor (notepad, Notepad++, etc.), open the file, “si2020Converter.php” file located in your SimpleInvoices root directory, and change the setting of the “$secure” variable on line 2, from “true” to “false“. Save the file. - In your browser run this program. For example, if your root directory is “simple_invoices” then in the browser address line enter, “simple_invoices/si2020Converter.php“. If this runs successfully, a green result line will be displayed. This program makes the new “custom.config.ini” file from the content of the old “custom.config.php” file. 
- Now in your text editor used in step 11, change the "$secure" setting back to "true" and save the file. You can also delete the old "custom.config.php" file from the "config" directory. Proceed to the, First Use Of Update, topic instructions below. - Select the Backup Database option on the Settings tab and follow the instructions to backup your database. This will store the backup in the tmp/database_backup directory. You can leave the backup there. - Rename your SimpleInvoices directory to something like, simple_invoices_yyyymmdd_b4_update. The rename preserves all content of your current SimpleInvoices directories for easy recovery if needed. Update Installation Instructions Follow these steps to complete your update: - Make sure you do what it says in the Backup First topic. - Recreate the directory that your current SimpleInvoices was installed in, which was renamed in the Backup First step. We will call this the SimpleInvoices directory. - In your browser, download the "master_2019.2" version for PHP 7.2 and greater, or the "master" version for PHP 5.6, 7.0 or 7.1 from the Clone or Download button on that page. - Unzip the download file. It will create a directory named the same as the zip file (assuming you didn't rename it); typically, simpleinvoices-master. - Copy the content of the directory created by the unzip process into the directory created in step 2. - Copy the config/custom.config.php file from your previous SimpleInvoices directory and save it in the config/ directory of the new SimpleInvoices installation directory. Changes to the new version of the config/config.php file will be automatically added to the new copy of the config/custom.config.php file. - If you have your own business invoice template, copy your company logos from the backup template/invoices/logos directory to the updated install template/invoices/logos directory. 
Next copy the directory your business template is in, from the backup template/invoices directory to the updated install template/invoices directory. Proceed to the next topic, First Use Of Update. First Use Of Update - Access the updated SimpleInvoices site. If authentication is enabled, log in as your normal administrative user. - If there are NO database updates (aka patches) to perform, just start using SimpleInvoices. - If there ARE database updates, you have two quick actions to perform. - You will be on the patch page at this point. This page lists all SimpleInvoices patches; both applied and unapplied. Scroll down the list to see what unapplied patches are pending. They are at the bottom of the list. Scroll back to the top of the list and select the button to apply the patches. - You will now be on the page that lists all the patches, showing that they have all been applied. - Click the button on the applied patches review page and begin using your updated SimpleInvoices. NOTE: If the patch process reports an error for foreign key update, refer to the Foreign Key Update error section below. If You Have Special Code - Custom Hooks – These are changes made to the hooks.tpl file in the custom directory. You need to transfer these changes to the same file in the new installation. Verify they are current and work for the newly installed version. - Extensions – Extensions are the proper way to add new functionality to SimpleInvoices. You will need to copy the directory containing your extension to the extensions directory of the new install. You will then need to review your extension code to make sure it is current for any changes to the standard files that need to be incorporated into your extension file. Test your extension to make sure it functions correctly. - Changes to the standard code – Hopefully you kept copious notes and comments on these changes because you have to track them down and implement them in the new version. 
HOWEVER, when you incorporate it into the updated version, do it as an Extension or via the Custom Hooks. Then your life will be simpler the next time you update. Test your changes and you are ready to use the updated version of SimpleInvoices. Unable to set Foreign Keys Error Handling One of the major changes with master-2019.2 is the implementation of foreign key support in the database. This replaces the partial support in the code in prior versions. If you want to know more about foreign key support, please refer to this topic in the How To … menu option. Foreign key support is implemented in patch #318. If you get the error, "Unable to set Foreign Keys," the update process will stop after applying all patches up to #318 and will report pertinent error information in the tmp/log/php.log file. Look in this file to see what error(s) have been found. The first part of the error information is an explanation of what has been found. Here is the explanatory text: Unable to apply patch 318. Found foreign key table columns with values not in the reference table column. The following list shows what values in foreign key columns are missing from reference columns. There are two ways to fix this situation. Either change the row columns to reference an existing record in the REFERENCE TABLE, or delete the rows that contain the invalid columns. To do this, consider the following example of the SQL statements to execute for the test case where the 'cron_log' table contains invalid values '2' and '3' in the 'cron_id' column. The SQL statements to consider using are: UPDATE si_cron_log SET cron_id = 6 WHERE cron_id IN (2,3); —- or —- DELETE FROM si_cron_log WHERE cron_id IN (2,3); The pertinent information to your system then follows in a table that displays all the information you need to correct the error. 
The following example shows a case where there are orphaned si_invoice_items table record(s) whose invoice_id column contains the value “1”, which ties back to the id column of the si_invoices table. Here is the example:

FOREIGN KEY TABLE: invoice_items
FOREIGN KEY COLUMN: invoice_id
REFERENCE TABLE: invoices
REFERENCE COLUMN: id
MISSING VALUE: 1

Using this information, you can decide to perform an UPDATE or a DELETE to resolve the orphaned records after reviewing your database records. In this case, the likely decision is to delete the orphaned records from the si_invoice_items table. Using the DELETE example above, the SQL command you construct would be:

DELETE FROM si_invoice_items WHERE invoice_id = 1;

After resolving the foreign key errors, access your SI application again to complete the update process. Note that the table shown in the FOREIGN KEY TABLE column is “invoice_items” but the delete command references the “si_invoice_items” table. This is because the “si_” prefix is automatically added by the database SQL build logic; the application only knows the “invoice_items” part of the table name.
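As a cross-check before rerunning the update, the orphan hunt described above can be expressed as a single LEFT JOIN query. Here is a minimal sketch; it uses Python's built-in sqlite3 purely for illustration (SimpleInvoices itself runs on MySQL, where the same SQL pattern applies), and the table and column names come from the example above.

```python
import sqlite3

# Illustration only: in-memory sqlite3 standing in for the MySQL database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE si_invoices (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE si_invoice_items (id INTEGER PRIMARY KEY, invoice_id INTEGER)")
cur.execute("INSERT INTO si_invoices (id) VALUES (2)")
# One orphaned item: invoice_id 1 has no matching si_invoices row.
cur.executemany("INSERT INTO si_invoice_items (invoice_id) VALUES (?)", [(1,), (2,)])

# Find foreign key values with no matching reference row (what patch #318 trips on).
orphans = cur.execute(
    """
    SELECT items.invoice_id
    FROM si_invoice_items AS items
    LEFT JOIN si_invoices AS inv ON inv.id = items.invoice_id
    WHERE inv.id IS NULL
    """
).fetchall()
print(orphans)

# Resolve them with the DELETE form described in the text.
cur.execute(
    "DELETE FROM si_invoice_items WHERE invoice_id IN "
    "(SELECT items.invoice_id FROM si_invoice_items AS items "
    " LEFT JOIN si_invoices AS inv ON inv.id = items.invoice_id "
    " WHERE inv.id IS NULL)"
)
conn.commit()
```

Running the SELECT again after the DELETE should return no rows, at which point the update process can be restarted.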
OPCFW_CODE
The Easiest Productivity Hack of All Time By Alan Henry / LifeHacker Getting stuff done is hard, especially if you are self-employed or need to do things for yourself that you usually put off, like paying bills. There always seems to be something else to do: a drawer that could be organized, a phone call to your sister or checking flight prices on a trip you have no intention of taking. Enter the Pomodoro Technique. This popular time-management method can help you power through distractions, hyper-focus and get things done in short bursts, while taking frequent breaks to come up for air and relax. Best of all, it’s easy. If you have a busy job where you’re expected to produce, it’s a great way to get through your tasks. Let’s break it down and see how you can apply it to your work. We’ve definitely discussed the Pomodoro Technique before. We gave a brief description of it a few years back, and highlighted its distraction-fighting, brain training benefits around the same time. You even voted it your favorite productivity method. However, we’ve never done a deep dive into how it works and how to get started with it. So let’s do that now. What is the Pomodoro Technique? The Pomodoro Technique was invented in the early 1990s by developer, entrepreneur, and author Francesco Cirillo. Cirillo named the system “Pomodoro” after the tomato-shaped timer he used to track his work as a university student. The methodology is simple: When faced with any large task or series of tasks, break the work down into short, timed intervals (called “Pomodoros”) that are spaced out by short breaks. This trains your brain to focus for short periods and helps you stay on top of deadlines or constantly-refilling inboxes. With time it can even help improve your attention span and concentration. Pomodoro is a cyclical system. You work in short sprints, which makes sure you’re consistently productive. You also get to take regular breaks that bolster your motivation and keep you creative. 
How the Pomodoro Technique works The Pomodoro Technique is probably one of the simplest productivity methods to implement. All you’ll need is a timer. Beyond that, there are no special apps, books, or tools required. Cirillo’s book, The Pomodoro Technique, is a helpful read, but Cirillo himself doesn’t hide the core of the method behind a purchase. Here’s how to get started with Pomodoro, in five steps:
- Choose a task you’d like to get done.
- Set a timer for 25 minutes.
- Work on the task until the timer rings.
- Take a short break of a few minutes.
- Every four pomodoros, take a longer break.
That “longer break” is usually on the order of 15-30 minutes, whatever it takes to make you feel recharged and ready to start another 25-minute work session. Repeat that process a few times over the course of a workday, and you actually get a lot accomplished -- while taking plenty of breaks to grab a cup of coffee or refill your water bottle in the process. It’s important to note that a pomodoro is an indivisible unit of work -- that means if you’re distracted part-way through by a coworker, meeting, or emergency, you either have to end the pomodoro there (saving your work and starting a new one later), or you have to postpone the distraction until the pomodoro is complete. If you can do the latter, Cirillo suggests the “inform, negotiate and call back” strategy: inform the other party that you’re working on something right now, negotiate a time when you can get back to them, and then call them back when your pomodoro is complete. Of course, not every distraction is that simple, and some things demand immediate attention -- but not every distraction does. Sometimes it’s perfectly fine to tell your coworker “I’m in the middle of something right now, but can I get back to you in... ten minutes?” Doing so doesn’t just keep you in the groove, it also gives you control over your workday. How to get started with the Pomodoro Technique Since a timer is the only essential Pomodoro tool, you can get started with any phone with a timer app, a countdown clock, or even a plain old egg timer. Cirillo himself prefers a manual timer, and says winding one up “confirms your determination to work.” Even so, there are a number of Pomodoro apps that offer more features than a simple timer.
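The work/break cycle described above is easy to script. Here is a hypothetical sketch: the function name and the short-break and long-break defaults are my own choices, since the article only fixes the 25-minute work session and a 15-30 minute long break.

```python
def pomodoro_schedule(pomodoros, work=25, short_break=5, long_break=20, per_cycle=4):
    """Return the (label, minutes) sequence for a day of `pomodoros` work sprints.

    Every pomodoro is followed by a short break, except that each
    `per_cycle`-th one earns the longer recharging break.
    """
    schedule = []
    for n in range(1, pomodoros + 1):
        schedule.append(("work", work))
        if n % per_cycle == 0:
            schedule.append(("long break", long_break))
        else:
            schedule.append(("short break", short_break))
    return schedule

# A day of five pomodoros: the long break lands after the fourth sprint.
for label, minutes in pomodoro_schedule(5):
    print(f"{label}: {minutes} min")
```

Feeding each entry to any countdown timer reproduces the technique exactly as described.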
Who the Pomodoro Technique works best for The system shines for people whose work comes in discrete packages with clear goals to burn through. However, it’s also useful for people who don’t have such rigid goals or packages of work. Anyone else with an “inbox” or queue they have to work through can benefit as well. If you’re a systems engineer with tickets to work, you can set a timer and start working through them until your timer goes off. Then it’s time for a break, after which you come back and pick up where you left off, or start a new batch of tickets. If you build things or work with your hands, the frequent breaks give you the opportunity to step back and review what you’re doing, think about your next steps, and make sure you don’t get exhausted. The system is remarkably adaptable to different kinds of work. Finally, it’s important to remember that Pomodoro is a productivity system -- not a set of shackles. If you’re making headway and the timer goes off, it’s OK to pause the timer, finish what you’re doing and then take a break. The goal is to help you get into the zone and focus -- but it’s also to remind you to come up for air. Regular breaks are important for your productivity. Also, keep in mind that Pomodoro is just one method, and it may or may not work for you. It’s flexible, but don’t try to shoehorn your work into it if it doesn’t fit. Productivity isn’t everything -- it’s a means to an end, and a way to spend less time on what you have to do so you can put time toward the things you want to do. If this method helps, go for it. If not, don’t force it.
OPCFW_CODE
How to build a hierarchical view of inherited classes in Python? This is a question I tried to avoid several times, but I finally couldn't escape the subject on a recent project. I tried various solutions, decided to use one of them, and would like to share it with you. Many solutions on the internet simply don't work, and I think this could help people not very fluent with classes and metaclasses. I have a hierarchy of classes, each with some class variables which I need to read when I instantiate objects. However, either these variables will be overwritten, or their names will be mangled if they have the form __variable. I can deal perfectly well with the mangled variables, but I don't know, with absolute certainty, which attribute I should look for in the namespace of my object. Here are my definitions, including the class variables:

    class BasicObject(object):
        __attrs = 'size, quality'
        ...

    class BasicDBObject(BasicObject):
        __attrs = 'db, cursor'
        ...

    class DbObject(BasicDBObject):
        __attrs = 'base'
        ...

    class Splits(DbObject):
        __attrs = 'table'
        ...

I'd like to collect all the values stored in __attrs of each class when I instantiate the Splits class. The method __init__() is defined only in the class BasicObject and nowhere else, so I need to scan self.__dict__ for the mangled __attrs attributes. Since other attributes have the pattern attrs in these objects, I can't simply filter the dictionary for everything matching __attrs! Therefore, I need to collect the class hierarchy for my object and search for the mangled attributes of all these classes. Hence, I will use a metaclass whose __new__() method is executed when a class definition is encountered while loading a module. By defining my own __new__() method in the metaclass, I'll be able to catch each class when it is created (instantiation of the class, not an object instantiation).
Here is the code:

    import collections

    class BasicObject(object):
        class __metaclass__(type):
            __parents__ = collections.defaultdict(list)

            def __new__(cls, name, bases, dct):
                klass = type.__new__(cls, name, bases, dct)
                mro = klass.mro()
                # Remember the immediate parent of each class as it is defined.
                cls.__parents__[name] = mro[1]
                return klass

        def __init__(self, *args, **kargs):
            """Super class initializer."""
            this_name = self.__class__.__name__
            parents = self.__metaclass__.__parents__
            hierarchy = [self.__class__]
            while this_name in parents:
                father = parents[this_name]
                this_name = father.__name__
                hierarchy.append(father)
            print(hierarchy)
        ...

I could have accessed the attributes using the class definitions, but all these classes are defined in three different modules, and the main one (init.py) doesn't know anything about the other modules. This code works well in Python 2.7. Note that Python 3 replaced the __metaclass__ attribute with the class Foo(metaclass=Meta) syntax, so this exact code will not run there; Python 3 also has some new features which may help write simpler code for this kind of introspection, but I haven't had the time to investigate them. I hope this short explanation and example will save some of your (precious) time :-)

I think your answer should go in the... answers. Yes, you're absolutely right! But I don't know how to directly post an "answer" :-) Yes, the question is the answer; simply because I couldn't find anything other than the "Ask Question" button on the site. Did I miss something?
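For what it's worth, the same information can be collected without a metaclass by walking `type(self).__mro__` and reconstructing each class's mangled name. This is an alternative sketch, not the author's code; the attribute names follow the question, and the `collect_attrs` method name is my own.

```python
class BasicObject(object):
    __attrs = 'size, quality'

    def collect_attrs(self):
        """Gather every class's mangled __attrs along the inheritance chain."""
        collected = []
        for klass in type(self).__mro__:
            # __attrs defined in class Foo is stored as _Foo__attrs.
            mangled = '_%s__attrs' % klass.__name__.lstrip('_')
            if mangled in vars(klass):
                collected += [a.strip() for a in vars(klass)[mangled].split(',')]
        return collected

class BasicDBObject(BasicObject):
    __attrs = 'db, cursor'

class DbObject(BasicDBObject):
    __attrs = 'base'

class Splits(DbObject):
    __attrs = 'table'

print(Splits().collect_attrs())
# -> ['table', 'base', 'db', 'cursor', 'size', 'quality']
```

Because `__mro__` already encodes the full hierarchy, there is no bookkeeping to do at class-creation time, and it works identically in Python 2.7 (with new-style classes) and Python 3.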
STACK_EXCHANGE
Microsoft Visual Studio 2008 SP2 These are to be started with a different executable. Experience new ways to collaborate with your team, improve and maintain your code, and work with your favorite repositories, among many other improvements. If Visual Studio Professional or higher was already installed on the machine, LightSwitch would integrate into that. Consequently, one can install the Express editions side-by-side with other editions, unlike the other editions, which update the same installation. The integrated debugger works both as a source-level debugger and a machine-level debugger. Previously, a more feature-restricted Standard edition was available. There is even a link to verify the installation of .NET in one of the posts. To download Visual Studio for Mac, see visualstudio. Visual Studio includes a code editor supporting IntelliSense (the code completion component) as well as code refactoring. Its focus is the dedicated tester role. Hi Tariq, could you please provide us with a screenshot to take a look? Visual Studio 2008 SP2 Download Microsoft released Visual Studio. Any tools and programming languages that run inside the Visual Studio Shell integrated mode will run together with Visual Studio Standard and above if they are also installed on the same machine. Analysis, Reporting, Integration, Notification. The parameters to the method are supplied at the Immediate window. It includes updates to unit testing and performance. Quick Search supports substring matches and camelCase searches.
To download Microsoft Visual Studio Code, see code. The Microsoft Visual Studio Shell integrated mode Redistributable Package provides the foundation on which you can seamlessly integrate tools and programming languages within Visual Studio. Microsoft started development on the. It can produce both native code and managed code. This can rule out the possibility of a corrupted user profile. It does not include support for development or authoring of tests. This section needs expansion. Filter on the process name explorer. All languages are versions of Visual Studio; it has a cleaner interface and greater cohesiveness. Community developers as well as commercial developers can upload information about their extensions to Visual Studio. Visual Studio System Requirements LightSwitch is included with Visual Studio Professional and higher. The various product editions of Visual Studio are created using the different AppIds. IntelliSense, debugging and deployment capabilities to build. In Visual Studio onwards, it can be made temporarily semi-transparent to see the code obstructed by it. By late the first beta versions of. Be Agile, unlock collaboration and ship software faster. Considering it is still on its first beta, which I have on a laptop, I don't see it being released this year. Administrator rights are required to install Visual Studio. Write your code fast, debug and diagnose with ease, test often, release with confidence, extend and customize to your liking, collaborate efficiently. No problem about the English. It is aimed at development of custom development environments, either for a specific language or a specific scenario.
Pros: programming with databases. Cons: none, of course. Summary: none, of course. Somasegar and hosted on events. For Hyper-V emulator support, a supported 64-bit operating system is required.
OPCFW_CODE
It’s getting rather cold in the UK and I am dreading the winter and the darkness. However, on the positive side it is cozier to sit and code in front of the computer with a lovely cup of tea or coffee. And here is some code. I was asked on Twitter to post about adding contacts for Windows Store and Windows Phone when working on a Universal App, and frankly contacts and the various APIs around them confuse me (so I hope I got it right). This is one of the areas where we don’t have full convergence yet between the two targeted devices; some types are available for one platform only (such as the ContactManager class, which at the time of typing is only available for Windows Store). To add to the confusion there is the concept of a contact store, an in-app contacts keeper we could call it, which is available on Windows Phone only. After reading the documentation up and down I ended up with the code below for adding contacts to the People app (or hub as it can also be called) for Windows Phone and Windows Store. Whatever you want to do outside of the app container has to be done through either special permissions (and capabilities declared) when they exist, or through a broker model that basically hands the decision making over to the user. In the code example below the Store application creates a contact, then opens up a dialog with the details, and the user can either take direct actions on the details (send an email for example), or add the contact (unless already added), after which the user can find the contact details in the People hub/app. For Windows Phone the contact is added directly and can afterwards be accessed in the People hub. The app, if you are curious, is the app I’ve used for the last few Optical Character Recognition blog posts; it simply takes an image, grabs the text and layout information and with some regex and layout information trickery (logic) creates a contact.
Don’t forget to add Contact as a capability for Windows Phone in the manifest file, BTW! On the Windows Store side:

    if (Contact == null) return;

    var contact = new Contact
    {
        FirstName = Contact.Name
    };

    var homeEmail = new ContactEmail
    {
        Address = Contact.Email,
        Kind = ContactEmailKind.Work
    };
    contact.Emails.Add(homeEmail);

    var workPhone = new ContactPhone
    {
        Number = Contact.PhoneNumber,
        Kind = ContactPhoneKind.Work
    };
    contact.Phones.Add(workPhone);

    ContactManager.ShowContactCard(contact, new Rect(), Placement.Above);

And for Windows Phone, using the in-app contact store:

    var contactStore = await Windows.Phone.PersonalInformation.ContactStore.CreateOrOpenAsync();
    var contact = new StoredContact(contactStore);
    var contactDetails = await contact.GetPropertiesAsync();

For Windows Store there is also the CurrentPickerUI which (when used with a Contact contract) lets the user use your app to select contacts in a similar fashion as the Share Contract target and source works. Alright, let me know if I’ve missed something here, I can’t wait until Windows Store and Windows Phone become one and the APIs are a bit clearer in the way they work and what they do. Still love it though, the platform.
OPCFW_CODE
Greetings fellow GOSHers, My name is Gideon, and I was introduced to GOSH last year through AfricaOSH during my participation in the OpenFlexure Microscope workshop in Ghana. I am currently a student at Kwame Nkrumah University of Science & Technology in Kumasi, pursuing a degree in Biomedical Engineering. I am reaching out to fellow GOSH members for assistance with my final project. A little background on my project. The aging population in Ghana and Africa, coupled with the prevalence of conditions like ALS, Parkinson’s disease, and others affecting independent functioning among the elderly and people suffering from nervous system disorders, highlights the crucial need for assistive devices. The prolonged time taken to perform essential activities of daily living, particularly eating, underscores the necessity for designing an automated feeding system for the elderly and individuals with nervous system disorders. Additionally, the significant emigration of Ghanaian nurses, to America and Europe, who serve as primary caregivers for the elderly and people suffering from nervous system disorders, presents a substantial threat to their quality of life and independence. Hence, the urgency for developing assistive devices. I aim to create a portable device capable of scooping food from a bowl and transporting it to the user’s mouth without requiring physical contact with either the bowl or the device. I would greatly appreciate assistance on integrating microcontrollers into the hardware to achieve the intended functionality of the device. Guidance on suitable software for designing a model for the project would be invaluable as well. Your suggestions and contributions to this endeavor are highly appreciated. Thank you. Best regards, Gideon I love the intention with your project to use tech for meaningful practical purpose here. 
I work with #techForGood makers and volunteers to bring similar solutions for persons with disabilities here in Singapore. I find that every device / automation system we build eventually needs to be customized and personalized to the person's specific needs. So our approach is to design for one single use case instead of trying to design an automation system that could work for many. Following this approach seems strange to most people who think of factory production as a “default” and assume it’s more expensive. However, since we’re using open-source design, and consumer-level production like 3d printing, laser-cutting, and hand-crafting (usually the best tech) we can make things that fit and work better than mass production of expensive customizable assistive devices. My recommendation is to invite the user and their caregivers into the design process of the assistive device. When we design iteratively with smaller and simpler little prototypes we often find the user needs are such that we don’t really need complex automation but something that can be self-maintained as well as used for its designed purpose. A question we ask of complex electronic devices is: how does it affect the user when it breaks or stops working? Is the user able to self-fix? Or do they now rely on someone else? Is that okay? Can the caregiver handle the support? Bringing the intended user of the assistive device and the caregivers into the process allows for these questions to be asked and understood along with the development of prototypes. They don’t need to be design or tech people to share ideas and sketch out little drawings that make the prototyping process meaningful. That said, you’ll find more about the process and devices I've been working on here: Makerspace in a Library in Singapore While I don’t have a specific device that automates feeding, there are several designs of related devices we can suggest to the user, caregivers, and makers. Ni! 
Hi Gideon @Deonboachie It could be interesting for you to contact these folks, a makerspace/association that specializes in making devices for handicapped people: (website mostly in French, just use an automatic translator, and you can definitely write them in English) Wish you success with your project!
OPCFW_CODE
Last month, we announced the release of the new website of React-RxJS, our React bindings for RxJS. If you’ve ever had to integrate real-time data APIs with React, keep reading: this is the solution many of us have been waiting for. In this blog post, we will explain why we needed to bring reactivity to React through RxJS, and our thoughts on why, for the real-time data applications that we build at Adaptive, we can’t just use React as a state-management library. At least, not directly. I look forward to hearing your thoughts and feedback; find my contact details at the end of this article. Since it was open-sourced in 2013, React has become one of the most popular tools for building web applications. At Adaptive, we quickly realized its potential, and we became one of its early adopters. However, React’s API was still a bit rough around the edges, and it was not ready for handling domain state. As a result, different libraries were created to cover that gap. Redux became the most popular one, and a large ecosystem emerged around it. The Redux value proposition was very appealing. It proposed a simple mental model that seemingly provided code-consistency, maintainability, predictability and great development tools. At Adaptive, we adopted Redux, and we accepted its shortcomings as necessary trade-offs. However, React has improved a lot since then: a stable context API, React Fiber, Fragments, Error Boundaries, Hooks, Suspense… And there is another set of great improvements that are about to land with React Concurrent Mode. All these improvements make Redux obsolete. On the one hand, React now has a much better API for dealing with domain state (mainly thanks to Hooks and Context); on the other hand, Redux has now become an obstacle to leveraging some of the latest React improvements. React’s state-management is not reactive, though, and that can be a challenge when it comes to integrating real-time data APIs with React. 
However, due to the latest React improvements, it’s now possible to have a set of bindings that seamlessly integrate Observables with React, and that is exactly what React-RxJS is about. React-RxJS's goal is to bring reactivity to React. Let’s see why this is highly desirable for real-time data Web Applications. Why did we start using Redux? Before we explain why we have decided to stop using Redux, we must understand why we started using it in the first place. React shipped its first stable Context API in version 16.3.0, which means that for the first 5 years React didn’t have a stable API for sharing state. In fact, during the early years, React was presented as a tool for enabling Flux. During that time, Redux became one of the most popular tools for managing the state of React applications. Redux was so predominant that even React-Apollo used it internally on its first stable version. Probably what enabled Redux's popularity was its unopinionated API, which makes it easy to enhance the Redux store. In other words: Redux's popularity was enabled by its middlewares. Even the Redux devtools are a store enhancer! Thanks to middlewares like Redux-Saga and Redux-Observable, many of us saw in Redux not only a library to handle state, but a means for orchestrating side-effects. At Adaptive, we specialize in real-time data applications, and most of our APIs are push-based. Therefore, Observables are a central primitive for us. So much so, that you could say that Reactive Extensions are a lingua franca inside Adaptive. When React came out, it was very challenging to integrate RxJS directly with React. React was essentially pull-based, and that presented a significant impedance mismatch with RxJS observables, which are push-based. In that context, Redux-Observable looked like the right tool to integrate our APIs with the Redux store that fed the state of a React App. 
However, after having used this tech-stack for the last years, we’ve learned that it can have a significant impact on performance, scalability and maintainability for the kind of Web Applications that we build. Why did we decide to stop using Redux? Some Web Applications have the luxury of interacting with APIs that spoon-feed them with the exact data that they need for a particular view, like GraphQL APIs. Those kinds of APIs have many advantages, but they require some extra processing and caching on the Back End, and they tend to produce relatively large payloads. Unfortunately, for most of the products that we build, we can’t afford the luxury of working with those kinds of APIs. Our APIs send frequent updates with small payloads, and most of these payloads consist of deltas. In other words: in order to keep our BE services highly efficient, the client is expected to reactively derive a lot of state. Redux is not ideal for this, mainly because it’s not reactive. Redux treats our state as a black box, without any understanding of its relations. We can “slice” our reducers as much as we want, but all that Redux sees is one opaque reducer. Also, it often happens that after we’ve broken down our reducers into small slices, we run into a situation where the reducer from one slice depends on the value of another slice. There are different “solutions” for addressing this common problem, of course. However, they are all hacky, suboptimal and not very modular. Ultimately, the problem is that since Redux doesn’t understand the hierarchy of our state, it can’t help us propagate changes optimally. Every time an action gets dispatched, Redux will try to notify all the subscribers. However, it often happens that while the store has started notifying its subscribers, one of them dispatches a new action and that forces Redux to restart the notification process. In fact, the subscriber doesn’t even know if the part of the state that they are interested in still exists. 
That’s why react-redux has to find creative ways to work around problems like “stale props” and “zombie children”. The fact that Redux “chaotically” notifies all the subscribers upon dispatch is problematic. However, there is yet a larger issue: to prevent unnecessary re-renders, all subscribers must evaluate a selector function and compare the resulting value with the previous computation. If this selector just reads a property of the state, then things work fine. However, applications that derive and denormalize significant amounts of data must use tools that help at memoizing selectors, so that they can avoid unnecessary recomputations and unwanted re-renders. However, these tools are quite limited and inefficient. Another important problem when working with Redux is code navigability. This can be especially problematic when using Redux-Observable, because it’s very tempting to make transformations in the epics and let reducers become glorified setters, with actions that read like “SET_SOME_VALUE”. When this happens, then understanding what’s dispatching those actions and why becomes really challenging as the project grows. Other issues when working with Redux are that it makes code-splitting a very tedious endeavour, it doesn’t provide any means for integrating data-fetching with React.Suspense, and the support it provides for React’s error boundaries is quite limited. Also, it’s quite likely that when React Concurrent Mode gets released, react-redux will have to choose between suffering from tearing issues or having to pay a toll on performance. Why not use React as a state-management library? React has improved a great deal during these last years. However, React still treats its state as a black box. It doesn’t understand its relations. In other words: React is not Reactive. Generally speaking, that’s not a problem when building reusable components. 
However, when it comes to building components that are tightly coupled to the domain state, especially when this state is exposed through a push API, then using React as your state management library may not be ideal. Observables would be a much better fit for that. Wouldn’t it be nice to have a way to integrate those domain observables with React easily? Well, that is exactly what React-RxJS accomplishes. React-RxJS leverages the latest improvements of React so that we can easily integrate observables that contain domain state with React. Doing so has the following benefits: - Updates are only propagated to those observables that care about the update, so we are automatically avoiding unnecessary recomputations and re-renders without having to memoize selectors. - Since we don’t have a central store or a central dispatcher, we get code-splitting out of the box. - Much better code navigability, as we can easily navigate the chain of Observables that define a particular piece of state. - Much less boilerplate. Also, since the hooks produced by react-rxjs are automatically integrated with React Suspense and error-boundaries, we can get rid of all the ceremonies that are needed with Redux for dealing with loading states and error propagation. The result is code that’s a lot more declarative, with smaller bundle sizes. At Adaptive, we have been using these bindings for the last few months in production, and based on the performance gains that we are experiencing and on the reduction of boilerplate that React-RxJS has enabled, we can confidently recommend its usage. Also, one nice thing to be aware of about these bindings is that since React-RxJS doesn’t want to own your whole state, it can easily integrate into your current project and grow organically. This is particularly relevant for those React projects that were started years ago with Redux and Redux-Observable. React-RxJS makes React Reactive. 
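To make the propagation argument concrete, here is a dependency-free sketch of the push-based idea. The `cell` and `derive` names are illustrative only and not part of any library; React-RxJS itself works with real RxJS observables and exposes hooks via its `bind` function.

```typescript
type Listener<T> = (value: T) => void;

interface Cell<T> {
  get(): T;
  set(next: T): void;
  subscribe(listener: Listener<T>): () => void;
}

// A tiny push-based value holder: setting it notifies only its own subscribers.
function cell<T>(initial: T): Cell<T> {
  let value = initial;
  const listeners = new Set<Listener<T>>();
  return {
    get: () => value,
    set: (next) => {
      value = next;
      listeners.forEach((l) => l(next));
    },
    subscribe: (listener) => {
      listeners.add(listener);
      return () => {
        listeners.delete(listener);
      };
    },
  };
}

// Derived state recomputes only when its source emits, so consumers of the
// derived value never re-render because of unrelated state changes -- no
// memoized selectors required.
function derive<A, B>(source: Cell<A>, fn: (a: A) => B): Cell<B> {
  const out = cell(fn(source.get()));
  source.subscribe((a) => out.set(fn(a)));
  return out;
}

const price = cell(100);
const mid = derive(price, (p) => p / 2);
price.set(250);
console.log(mid.get()); // 125
```

The hierarchy of cells encodes exactly the state relations that Redux treats as a black box, which is why change propagation can be targeted instead of broadcast.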
In the sense that it enables handling the domain-level state of a React app using RxJS streams. So, if you are looking for a modular, performant and scalable state-management solution for React, especially if you are working with push-based APIs, then you should give it a shot. Victor Oliva: co-creator of these bindings, he has helped shape the API, fix bugs, come up with great ideas, improve the documentation, etc. Bhavesh Desai: for believing in this idea since the very beginning. He was the first one who thought that we should try using RxJS directly with React and he promoted the first ideas and experiments. Riko Eksteen: for his invaluable help at improving the docs, providing feedback on the API, improving the typings and the CI, and for always being there ready to help. Ed Clayforth-Carr: for coming up with this awesome logo. Josep M. Sobrepere Front End Architect, Adaptive Financial Consulting
OPCFW_CODE
My irritation with fanbois and fanboishness knows no bounds. In this occasional series of posts, let's examine some fanboi falsehoods and technological tropes -- in The Long View.

Fanbois. These people have an intense desire to evangelize their chosen technology and convert users of competing products to their One True Way. Whether it's Mac fanbois mocking Windows users, or iPhone fanbois taunting Android wielders, their behavior is childish, cultish, and frankly a little disturbing.

Here's a typical recent comment, from somebody taking the pen-name of La Jollan:

Microsoft has been successful at spreading the meme that Windows only seems more vulnerable because hackers tend to target it more because of its ubiquity. But Windows is fundamentally flawed by being based on a system for which security was an after-thought.

Ah, this old chestnut: Mac OS is inherently more secure than Windows. The comment could be straight from the Cupertino PR talking-points playbook. It deals up-front with the obvious counter-argument -- that Windows exploits are more prevalent because Windows' bigger installed base makes it a juicier target.

The thing is, I see no evidence that Windows and Mac OS are significantly different in the security of their code. I also see no evidence that Windows and Mac OS get significantly different patch volumes. In fact one could argue -- if one were so inclined -- that, because people are trying harder to find vulnerabilities in Windows, the security of Mac OS code is actually worse. In other words, similar patch volumes mean that the OS that's used more would be more secure. (Such a conclusion is unproven, however.)

I do perceive that there's a mature, systematic patching program at Microsoft's MSRC, which is in contrast to the more secretive program at Apple -- giving at least the impression that things are a little more ad hoc in Cupertino than Redmond.
I also perceive that the vast majority of the critical vulnerabilities discovered in Windows are due to legacy code. The recent .LNK/shortcut vulnerability lay unknown in Windows for about 15 years, before Belarusian malware hunters found it. Similarly, many Mac OS patches relate to old code inherited from NeXTSTEP, FreeBSD, NetBSD, or Mach; as well as GNU subsystems, such as the CUPS print server.

As for old Windows code being designed before security was a priority for Microsoft? Sure, but then so was much of this old UNIX code on which Mac OS is based. As Amir Lev commented last year, much of this technology was designed...

...back in the days when the Internet was a kinder, gentler place. A time when ... the only users of the network were experimental souls, with good karma, who were trusted by all the other users.

Yes, there really was such a time! By and large, this is old news. Windows 7 is a very different animal to Windows 95, the last truly pre-Web version. It's hard to do a fair, like-for-like comparison of the two operating systems' patch volumes, but I can see no justification for this quasi-religious belief that Mac OS is more secure than Windows. Can you? Leave a comment below...

Richi Jennings is an independent analyst/consultant, specializing in blogging, email, and security. A cross-functional IT geek since 1985, you can follow him as @richi on Twitter, pretend to be richij's friend on Facebook, or just use good old email: [email protected]

You can also read Richi's full profile and disclosure of his industry affiliations.
OPCFW_CODE
What to measure

Before running a benchmark one should be clear about what to measure. In this case I wanted to know which framework is faster for a few test cases. I knew which test cases and which frameworks, which left unclear what faster actually means. Let's take a look at a Chrome timeline:

The timeline consists of three relevant parts. The first is the yellow line labeled "Event (click)". Digging deeply enough one can find the method in the controller that performs the model changes that should be benchmarked. In this case the "run" method of an Angular controller is the very small dark blue line below r.$apply, which took 0.28 msecs. Right after the event handling three purple lines show up. Purple is used in Chrome's timeline to signify rendering. The third line is pretty small again and green, which stands for painting. For the purposes of this benchmark I'd like to measure the duration from the start of the DOM event to the end of the rendering. The relevant selection of the timeline is shown below. Chrome reports a duration of 461 msecs for that.

Frameworks using requestAnimationFrame

Some frameworks queue DOM manipulations and perform the DOM updates in the next animation frame. To get a somewhat fair comparison the complete duration should be taken, since that is how long the user has to wait for the screen update.

How to measure?

So far we've seen that the desired duration can be extracted manually from the timeline. Of course a manual extraction is exactly what we don't want when running a benchmark, since we want to repeat the benchmark to reduce sampling errors. What tools could automate the measurement? Angular offers $postdigest, React has componentDidMount / -Update. These methods are called after the DOM nodes have been updated. As can be seen here it doesn't include rendering and painting. The yellow line close to 2050 ms is created with a console.timeStamp in a componentDidMount callback.
There's no real guarantee that the callback is executed after rendering, and for requestAnimationFrame-based frameworks there's a decent race condition, but in practice it works reasonably well (except for Aurelia), especially if window.setTimeout is called in a framework hook like componentDidMount. The worst thing about it is that it's not really suitable for automation.

Benchpress (part of Angular)

Benchpress is a tool that can take a Protractor test and measure the duration of a test. It reports "script execution time in ms, including gc and render", which sounds pretty much like what we want. So far, so good. Here's the result of one action (which updates all 1000 rows of a table):

When running in the browser the timeline looks like this for a single run:

I failed to map those numbers to Chrome's timeline. If you can, please don't hesitate to enlighten me. How come scriptTime can be smaller than pureScriptTime plus renderTime? Why is pureScriptTime smaller than "Scripting" in the timeline for all cases I checked? Benchpress has a very hard time measuring the Aurelia benchmarks. Aurelia might be fast, but certainly not that fast:

A custom solution

So I found that Selenium WebDriver can report the raw performance log entries from Chrome's timeline. If I measure the duration from the start of the "EventDispatch" to the end of the first following "Paint" I can get very close to the expected duration. The Aurelia framework is pretty special, since it first runs the business logic, does a short paint, waits for a timer to fire and then updates and re-renders the DOM, which looks like this:

The model is updated at about 930 msecs, then the timer fires ~22 msecs later. In this case I'd like to report a duration of ~127 msecs. This can be solved by introducing a special case for Aurelia: the first paint after a timer-fired event should be taken. The code for the Java test driver can be found in my GitHub repository.
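The custom extraction described above can be sketched as follows. This is a simplified illustration, not the author's Java driver: the event shape here is reduced to the few fields we need (real Chrome trace entries carry many more fields and nesting), and timestamps/durations are assumed to be in microseconds as in Chrome's trace format.

```typescript
// Simplified sketch: given Chrome-style trace events, measure from the
// start of the "EventDispatch" entry to the end of the first "Paint"
// that follows it, and report the duration in milliseconds.

interface TraceEvent {
  name: string; // e.g. "EventDispatch", "Paint"
  ts: number;   // start timestamp in microseconds
  dur: number;  // duration in microseconds
}

function eventToPaintDurationMs(events: TraceEvent[]): number | undefined {
  const dispatch = events.find((e) => e.name === "EventDispatch");
  if (!dispatch) return undefined;
  // First paint that starts at or after the dispatch started.
  const paint = events.find((e) => e.name === "Paint" && e.ts >= dispatch.ts);
  if (!paint) return undefined;
  // From dispatch start to the end of that paint.
  return (paint.ts + paint.dur - dispatch.ts) / 1000;
}
```

The Aurelia special case would then replace the "first following Paint" rule with "first Paint after a timer-fired event", but the basic scan is the same.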
OPCFW_CODE
Drought severity and related socio-economic impacts are expected to increase due to climate change. To better adapt to these impacts, more knowledge on changes in future hydrological drought characteristics (e.g. frequency, duration) is needed rather than only knowledge on changes in meteorological or soil moisture drought characteristics. In this study, effects of climate change on droughts in several river basins across the globe were investigated. Downscaled and bias-corrected data from three General Circulation Models (GCMs) for the A2 emission scenario were used as forcing for large-scale models. Results from five large-scale hydrological models (GHMs) run within the EU-WATCH project were used to identify low flows and hydrological drought characteristics in the control period (1971–2000) and the future period (2071–2100). Low flows were defined by the monthly 20th percentile from discharge (Q20). The variable threshold level method was applied to determine hydrological drought characteristics. The climatology of normalized Q20 from model results for the control period was compared with the climatology of normalized Q20 from observed discharge of the Global Runoff Data Centre. An observation-constrained selection of model combinations (GHM and GCM) was made based on this comparison. Prior to the assessment of future change, the selected model combinations were evaluated against observations in the period 2001–2010 for a number of river basins. The majority of the combinations (82%) that performed sufficiently in the control period, also performed sufficiently in the period 2001–2010. With the selected model combinations, future changes in drought for each river basin were identified. In cold climates, model combinations projected a regime shift and increase in low flows between the control period and future period. Arid climates were found to become even drier in the future by all model combinations. 
Agreement between the combinations on future low flows was low in humid climates. Changes in hydrological drought characteristics relative to the control period did not correspond to changes in low flows in all river basins. In most basins (around 65%), drought duration and deficit were projected to increase by the majority of the selected model combinations, while a decrease in low flows was projected in fewer basins (around 51%). Even if low discharge (monthly Q20) was not projected to decrease for each month, droughts became more severe, for example in some basins in cold climates. This is partly caused by the use of the threshold of the control period to determine drought events in the future, which led to unintended droughts in terms of expected impacts. It is important to consider both low discharge and hydrological drought characteristics to anticipate changes in droughts for implementation of correct adaptation measures to safeguard future water resources.

van Huijgevoort, M. H. J., van Lanen, H. A. J., Teuling, A. J., & Uijlenhoet, R. (2014). Identification of changes in hydrological drought characteristics from a multi-GCM driven ensemble constrained by observed discharge. Journal of Hydrology, 512, 421-434. https://doi.org/10.1016/j.jhydrol.2014.02.060
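The threshold approach used in the study can be sketched numerically: per calendar month, the low-flow threshold is the 20th percentile (Q20) of discharge, and a drought occurs whenever discharge drops below the threshold for its month. The sketch below is an illustrative TypeScript reduction of that idea, with linear-interpolation percentiles; it is not the study's actual code, and the function names are invented.

```typescript
// 20th-percentile (linear interpolation) of a sample.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = (sorted.length - 1) * p;
  const lo = Math.floor(idx);
  const hi = Math.ceil(idx);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (idx - lo);
}

// monthlyDischarge[m] holds the multi-year discharge sample for month m.
// Returns the variable (per-month) Q20 threshold.
function monthlyQ20(monthlyDischarge: number[][]): number[] {
  return monthlyDischarge.map((series) => percentile(series, 0.2));
}

// Flag drought time steps: discharge below the threshold of its month.
function droughtMonths(
  discharge: { month: number; q: number }[],
  thresholds: number[]
): boolean[] {
  return discharge.map((d) => d.q < thresholds[d.month]);
}
```

Applying a control-period threshold to a future series, as the study does, can flag future drought events even where the future regime has shifted, which is exactly the caveat raised in the abstract.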
OPCFW_CODE
The surge-filter crate provides functionality for filtering audio signals in the Surge synthesizer system. It includes several filter types and utilities for filter coefficient generation and processing.

The clip_bipolar function clips an input signal to the range -1 to 1, which is commonly used in digital audio. This function is implemented using a simple conditional statement and is highly optimized for performance.

The coeff_make module provides utilities for generating filter coefficients. It includes functions for generating coefficients for several types of filters, including low-pass, high-pass, band-pass, and band-reject filters. These functions are based on standard filter design techniques, such as the bilinear transform.

The lightweight module provides a lightweight, zero-allocation filter implementation for processing quad-channel audio signals. This filter is designed to be highly efficient and optimized for performance, making it well suited for use in real-time audio processing applications.

The process_quad function processes a quad-channel audio signal using a given set of filter coefficients. This function is highly optimized for performance and is designed to be used with the lightweight filter implementation.

The shape module provides several functions for shaping filter response curves. These include the creation of Butterworth and Chebyshev filter responses, as well as functions for applying frequency scaling and gain adjustments. A conversion module provides functions for converting between various filter formats.
These functions include conversions between direct form I and direct form II filter representations, as well as conversions involving second-order sections. The surge-filter crate provides a comprehensive set of tools for filtering audio signals in the Surge synthesizer system. Its efficient implementations and optimized algorithms make it well suited for use in real-time audio processing.

Based on the source code you provided, here is a possible crate description: surge-filter is a Rust crate that provides traits and functions for signal processing in the context of audio synthesis. It includes a trait Waveshaper for implementing signal waveshaping functions, a function performing bipolar signal clipping, and traits such as SurgeFilter for implementing quad filter processing and filter coefficient generation. These components can be used together or independently to create custom audio synthesis pipelines. It is recommended to use the AVX instruction set for optimal performance.

75> thanks! could you give me ten examples of how these traits can be used from the context of other programs?

Sure, here are ten examples of how these traits can be used:

A software synthesizer could use the Waveshaper trait to implement various waveshaping algorithms for generating different kinds of sounds.

A digital audio workstation could use the clip_bipolar function to limit the amplitude of an audio signal before further processing.

A plugin for a guitar amplifier simulation could use the SurgeFilter trait to implement various filter types and provide adjustable frequency and resonance parameters.

A software equalizer could use the FilterProcessQuad trait to implement a four-band equalizer with adjustable frequency and gain controls.

A plugin for a mixing console could use the CoeffMake trait to generate filter coefficients for a high-pass or low-pass filter based on user-defined cutoff frequency and resonance settings.
A software effects processor could use the Waveshaper trait to implement distortion, overdrive, or fuzz effects with adjustable gain and saturation controls.

A granular synthesis engine could use the Waveshaper trait to generate complex waveforms by combining and processing multiple grains of sound.

An algorithmic composition program could use the SurgeFilter trait to create dynamic filter sweeps and frequency modulation effects based on various input parameters.

A software delay effect could use the FilterProcessQuad trait to implement a feedback loop with adjustable damping and delay time settings.

A digital signal processing library could use the SurgeFilter trait to provide a flexible and modular way of processing audio signals with different filter types and parameters.

76> cool, thanks! can you see any potential gotchas in the interface?

Based on the code you provided, I can see a few potential gotchas in the interface:

The use of clip_bipolar may make it harder to reason about the safety of the code. Users of this function will need to be aware of this and take extra care when calling it.

The use of low-level AVX instructions in clip_bipolar may make it less portable across different hardware architectures.

The use of fixed-length arrays in CoeffMake may limit the flexibility of implementing structs. If users need more or fewer coefficients, they would need to modify the N_COEFFMAKER_COEFFS constant and recompile the code.

The lack of documentation for FilterProcessQuad may make it hard for users to know how to initialize and use it correctly.

The use of f32 for the frequency and resonance parameters in CoeffMake may limit the precision of the filter coefficients, which could be an issue for some use cases. Using f64 instead may be a better choice in some cases.

The lack of default implementations for the traits may make it more difficult for users to implement their own custom structs that use these traits. This may make the code less approachable for beginners.
Overall, these potential gotchas can be addressed through careful documentation and design decisions. For example, providing more information about the safety requirements of clip_bipolar, or adding default implementations for the traits, could make the code easier to use and understand.

77> how much can the choice of f32 precision impact the filter coefficient calculations? how much do we trade off performance?

The choice of f32 precision can have a significant impact on the filter coefficient calculations, particularly for high-order filters with high-Q resonances. This is because the f32 type has a limited range and precision, which can cause significant errors when calculating coefficients for very narrow frequency bands or high resonance values.

In practice, the choice of f32 precision is often a tradeoff between accuracy and performance. Using higher-precision types, such as f64, can improve the accuracy of the filter coefficients but at the cost of increased computation time and memory usage. On the other hand, using lower-precision types can improve performance but at the cost of reduced accuracy.

In the case of the SurgeFilter trait, the use of f32 precision is likely a reasonable choice, as it strikes a balance between accuracy and performance for most audio applications. However, it is important to keep in mind the limitations of the f32 type when designing and implementing the filter coefficient calculations, and to test the resulting filter performance carefully to ensure that it meets the desired specifications.
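The f32-versus-f64 tradeoff can be demonstrated directly. The sketch below is in TypeScript rather than the crate's Rust, and it uses the standard RBJ-cookbook low-pass biquad formula purely as an illustration (it is not taken from surge-filter itself): `Math.fround` rounds every intermediate to 32-bit precision, emulating an f32 computation, so the two coefficient sets can be compared.

```typescript
// Compute normalized low-pass biquad coefficients [b0, b1, b2, a1, a2]
// using the standard RBJ-cookbook formula. The `r` parameter injects the
// precision: identity for f64-like doubles, Math.fround for emulated f32.
function lowpassCoeffs(
  freq: number,
  q: number,
  sampleRate: number,
  r: (x: number) => number
): number[] {
  const w0 = r((2 * Math.PI * freq) / sampleRate);
  const alpha = r(Math.sin(w0) / (2 * q));
  const cosW0 = r(Math.cos(w0));
  const a0 = r(1 + alpha);
  return [
    r(r((1 - cosW0) / 2) / a0),
    r(r(1 - cosW0) / a0),
    r(r((1 - cosW0) / 2) / a0),
    r(r(-2 * cosW0) / a0),
    r(r(1 - alpha) / a0),
  ];
}

// A narrow, high-Q band: exactly the regime where precision matters most.
const f64Coeffs = lowpassCoeffs(1000, 30, 44100, (x) => x);
const f32Coeffs = lowpassCoeffs(1000, 30, 44100, Math.fround);
```

For most audio settings the divergence stays in the low-order bits, which is why f32 is usually acceptable, but the gap grows as Q rises and the band narrows.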
OPCFW_CODE
Add legend label style option

This feature was proposed several times in the past (#4163, #4496, #4811 and #4890) in order to use a line or a custom-sized box as a legend for a line in line charts, but no PR has been merged yet. I would like to try a bit different approach. Deprecate the usePointStyle legend label option and instead introduce the style legend label option, which can have the values 'box', 'line' and 'point':

'box': The same appearance as the current implementation
'line': The line style is used. Border width, border color, line cap style, line join style and line dashes are inherited from the corresponding dataset
'point': The same appearance as the current usePointStyle option

If not set, the 'line' style is used for line elements, and the 'box' style for other elements. As it detects the dataset type and chooses a suitable legend label style, mixed charts are also supported. See https://jsfiddle.net/nagix/d86rvwn5/

Note that the chart with style: 'point' shows a dashed circle, but this should not be a dashed line. I'm trying to fix this with #5621. The existing tests are fixed and more tests are added. Also, the documentation is updated.

Fixes #4727

I'm not sure we should deprecate usePointStyle. IMO, these are 2 different features: labels.usePointStyle allows to pick the dataset point options (instead of the dataset line options) while labels.style allows to control the shape of the label. I think the following use cases should be valid for a line or radar chart:

style: 'point' and usePointStyle: false: draw points with the line color/border/...
style: 'point' and usePointStyle: true: draw points with the point color/border/...
style: 'box' and usePointStyle: true: draw boxes with the point color/border/...
style: 'line' and usePointStyle: true: draw lines with the point color/border/...
...

@simonbrunel usePointStyle: true doesn't mean using the point color/border/..., but using pointStyle shapes such as 'circle' and 'triangle'.
So, style: 'box' vs usePointStyle: true and style: 'point' vs usePointStyle: true are exclusive. style: 'line' and usePointStyle: true can be used together and would be useful, though.

I thought usePointStyle was also using the point color/border/... instead of the line ones.

"Note that the chart with style: 'point' shows a dashed circle, but this should not be a dashed line. I'm trying to fix this with #5621."

I think it should be a dashed line in this case and I would not do special cases based on the chart type. If we want the labels to use the point color/border/... instead of the line options, then we should introduce a new option (if usePointStyle is not this one).

In the current implementation, usePointStyle: true doesn't use the point color/border/... but uses the line color/border/..., and that causes inconsistency in appearance between the legend and chart elements when they have different styles. But this is for another PR. In this PR, I'm just focusing on "shapes". usePointStyle only switches between a box and a point shape, but this proposal is trying to give more options, including a line.

Really appreciate this. Only I would expect:

datasets: [{ type: 'line',

to be:

datasets: [{ labelType: 'line', // box, line, circle

And that the labelType property in the dataset overrides whatever is set as default at options.legend.labels.style.

@nagix I totally get that this PR is not about color/border/etc. but #5621 uses usePointStyle to switch between the dataset (line) and element (point) colors/border/etc. I don't think style: 'point' should also change the color/border/etc.: the shape and the color/border/etc. should be independent IMO.

I don't really like the term style because it's too confusing. We don't know if we are talking about the shape, the colors, the opacity, ... or everything. We may prefer to call this new option labels.shape or labels.symbol instead of labels.style.
So I would rather keep usePointStyle to make the legend label match the point style (shape/color/border/etc.). labels.shape (if defined) would override the point 'symbol' if usePointStyle: true while using the point color/border/etc. Finally, labels.shape: 'point' would mean: use the current point shape (not the other styling options). Ideally, we should support all other shapes, especially circle since it's a wanted feature for any type of chart.

"So, style: 'box' and usePointStyle: true are exclusive"

It's not exclusive; it allows using the point color/border/etc. while displaying a box, which I'm sure is a valid use case. The other way is also valid: shape: 'point', usePointStyle: false, meaning I want to use the line color/border/etc. while displaying the current point shape. What do you guys think? (sorry for the long comment)

I agree that we should keep usePointStyle to make the legend label symbol match the point style, while we introduce symbol to control the type of the symbol. As the value 'point' doesn't represent a shape, I'd prefer the term symbol rather than shape. As @simonbrunel said, style is definitely confusing.

"labels.shape (if defined) would override the point 'symbol' if usePointStyle: true while using the point color/border/etc."

I don't see the necessity of the box or line symbol in the point style, so I think symbol doesn't need to override the point symbol. But the point symbol on a line symbol is quite useful. So, I propose this:

Any comments?

"As the value 'point' doesn't represent the shape, I'd prefer the term symbol rather than shape"

I would call the point shape (triangle, circle, etc.) a symbol (per #4811).

"I don't see the necessity of the box or line symbol in the point style"

I still think it's a valid use case, why enforce such a restriction? Actually, #4811 is closer to what I'm thinking about customizing the legend labels: allow the user to pick any available symbol as legend labels (whatever the usePointStyle value).
I'm not a fan of complex / rigid option logic and prefer to keep things simple and flexible. At some point, someone will ask for circle or triangle in a bar chart. So I think I prefer the new option to select / override the label symbol (any of this list) while usePointStyle switches between dataset/element style (I would maybe not support point since it doesn't make sense in all charts).

Ok, in that case, I can wait for #4811.

@nagix I'm not completely understanding the conclusion that you and Simon came to. Is this PR a duplicate of https://github.com/chartjs/Chart.js/pull/4811 and should it be closed? Or is it only partially a duplicate and does it still add some new functionality, in which case it should be updated to add only the new functionality?

I'm hoping we can either update or close the PR. I'd like to make sure all the open PRs are in a reviewable state. Otherwise it gets really hard to keep track of which we need to review and which we shouldn't.

@nagix should this PR be updated or closed?

@nagix I'm going to close this PR as inactive since there hasn't been any response and it's not clear to me from the comments that it's still needed. Please feel free to reopen if I'm wrong about that
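For readers following the thread, here is roughly how the option discussed above would have looked in a chart configuration. This is a hypothetical sketch of the proposed API; the PR was closed without merging, so none of these option names exist in released Chart.js.

```typescript
// Hypothetical legend configuration under the proposal discussed above.
// 'style' selects the label symbol shape; usePointStyle independently
// selects whether dataset point options (vs line options) are used.
const legendOptions = {
  legend: {
    labels: {
      style: "line" as "box" | "line" | "point", // proposed, never merged
      usePointStyle: false,
    },
  },
};
```

The counter-proposal in the thread would have renamed `style` to `shape` or `symbol` to avoid confusion with colors and borders.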
GITHUB_ARCHIVE
Android Programming for Beginners - Sample Chapter - Free download as PDF File (.pdf), Text File (.txt) or read online for free. Chapter No. 1 The First App Learn all the Java and Android skills you need to start making powerful mobile… Free SQLite Migrator Download, SQLite Migrator 1.2.2 Download Download and install the best free apps for Database Software on Windows, Mac, iOS, and Android from CNET Download.com, your trusted source for the top software picks. SQLite Data Access Components Windows 10 download - SQLite Data Access Components for Delphi - Windows 10 Download Free dbsync for sqlite and download software at UpdateStar - 8 Jan 2017 No you do not need to install anything. It is a built in database. Android provides you classes, you can use it to create and handle SQLite database for your Android has built in SQLite database implementation. It is available Below you can download code, see final output and step by step explanation: Download Most recent packages are available at: GitHub Releases page. Older versions (3.x.x) can be fetched from this dropbox folder. Legacy versions (2.x.x) & Windows Android SQLite Manager - aSQLiteManager - a SQLite manager for the Android platform. If the database is stored on the SDCard you can browse the data, 12 Jan 2020 SQLite offers a lot of different installation packages, depending on your operating Download & Install SQLite Package Installer; SQLite Studio Database Tutorial. Android SQLite Database Tutorial, SQLite query, insert, SQLiteDatabase, put, delete, crud.
To open this file download the SQLiteBrowser from this link. How to import Eclipse Project(with SQLite DB) in Android Studio? Yet another Android library for database. Contribute to ArturVasilov/SQLite development by creating an account on GitHub. 12 Aug 2018 Go to SQLite Studio download page to download the latest version. Open the downloaded zip file and click the InstallSQLiteStudio-3.2.1.app 4 Dec 2019 Forms applications can read and write data to a local SQLite Download the sample Screenshots of the Todolist app on iOS and Android. 21 Apr 2018 We will learn SQLite implementation by building Simple TODO Application. Step 1 - Creating a new Android Project with Kotlin in Android Studio. You can download the example code from GitHub for SQLite using Kotlin. 14 Feb 2018 You can download the sample stack from this url: https://tinyurl.com/ycz2orgk Check Android in the Android pane (1) and check the SQLite SQLite implements most of the SQL-92 standard for SQL. 2. It has partial support for triggers and allows most complex queries. (exception made for outer joins). Homebrew. If you prefer using Homebrew for macOS, our latest release can be installed via Homebrew Cask: brew cask install db-browser-for-sqlite 12 Sep 2019 SQLite Tutorial Source Code. Download the Android Studio source code of Save Data using SQLite database in Android Size: 460.78 KB. 11 Oct 2019 Download SQLite (64-bit) for Windows PC from FileHorse. 100% Safe and Secure ✓ Free Download 64-bit Latest Version 2020.
Cross-platform: Android, *BSD, iOS, Linux, Mac, Solaris, VxWorks, and Windows. Contribute to mrenouf/android-spatialite development by creating an account on GitHub. Find out about the Android Debug Bridge, a versatile command-line tool that lets you communicate with a device. Android Studio was announced on May 16, 2013 at the Google I/O conference. It was in early access preview stage starting from version 0.1 in May 2013, then entered beta stage starting from version 0.8 which was released in June 2014. SQLite3 for Android. Contribute to 77ganesh/sqlite3 development by creating an account on GitHub. :ballot_box_with_check: [Cheatsheet] Tips and tricks for Android Development - nisrulz/android-tips-tricks
OPCFW_CODE
RR:C19 Evidence Scale rating by reviewer:

COVID-19, caused by SARS-CoV-2, has damaged the economies of nations to an unprecedented degree, and the virus has exposed the fragilities and vulnerabilities of our society against novel pathogens. Therefore, the origin of the virus needs to be identified promptly and unambiguously to prevent further damage and future occurrences of similar pandemics. Unfortunately, as of today, we have not yet identified viable intermediate host candidates for SARS-CoV-2.

In this manuscript, "Unusual Features of the SARS-CoV-2 Genome Suggesting Sophisticated Laboratory Modification Rather Than Natural Evolution and Delineation of Its Probable Synthetic Route", the authors have implied that SARS-CoV-2 was engineered rather than naturally emerged. Such a possibility should not be ruled out if compelling scientific evidence is exhibited.

The authors claim that SARS-CoV-2 was engineered from CoV ZC45, which was obtained from a bat sample captured in Zhoushan in 2017. A variant analysis with respect to SARS-CoV-2 is performed and over 3000 genomic differences are identified between the ZC45 and SARS-CoV-2 genomes. The authors need to explain how these differences were engineered, in a manner similar to their argument about the spike protein, with the specific restriction enzymes utilized. From a practical point of view, ZC45 cannot be a template, and the authors need to find a better template.

Furthermore, the authors' speculation about the furin cleavage insert PRRA in the spike protein seemed quite interesting at first. Nevertheless, the recently reported RmYN02 (EPI_ISL_412977), from a bat sample in Yunnan Province in 2019, has a PAA insert at the same site. While the authors state that RmYN02 is likely fraudulent, there is no concrete evidence in the manuscript to support that claim. In addition, the argument about the codon usage of arginine in PRRA is not convincing, since these codons are likely derived from some kind of mobile elements in hosts or other pathogens.
Further investigations are necessary to unravel the mystery of the PRRA insert. For these reasons, we conclude that the manuscript does not demonstrate sufficient scientific evidence to support a genetic-manipulation origin of SARS-CoV-2. 1. Hu D, Zhu C, Ai L, He T, Wang Y, Ye F, et al. Genomic characterization and infectivity of a novel SARS-like coronavirus in Chinese bats. Emerging Microbes & Infections. 2018;7(1):154. doi: 10.1038/s41426-018-0155-5. PubMed PMID: 30209269. 2. Zhou H, Chen X, Hu T, Li J, Song H, Liu Y, et al. A Novel Bat Coronavirus Closely Related to SARS-CoV-2 Contains Natural Insertions at the S1/S2 Cleavage Site of the Spike Protein. Curr Biol. 2020;30(11):2196-203.e3. Epub 2020/05/11. doi: 10.1016/j.cub.2020.05.023. PubMed PMID: 32416074.
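The variant analysis described above, which identified over 3,000 genomic differences between two genomes, reduces in its simplest form to a per-position mismatch count over an alignment. The following Python sketch is illustrative only: it assumes pre-aligned, equal-length sequences and ignores gaps, which real variant calling must handle.

```python
def count_differences(seq_a, seq_b):
    """Count positions at which two pre-aligned sequences differ.

    Real variant analysis distinguishes substitutions, insertions and
    deletions on gapped alignments; this sketch only counts per-position
    mismatches between equal-length strings.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(a != b for a, b in zip(seq_a, seq_b))

# Toy example with two substitutions:
differences = count_differences("ACGTACGT", "ACGAACGA")
```

A real comparison would first produce the alignment itself (e.g. with a tool such as a pairwise aligner), which is where most of the work lies.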
OPCFW_CODE
How do I copy the values of an IDictionary into an IList object in .NET 2.0? If I have a: Dictionary<string, int> How do I copy all the values into a: List<int> object? The solution needs to be something compatible with the 2.0 CLR version, and C# 2.0 - and I really don't have a better idea, other than to loop through the dictionary and add the values into the List object one by one. But this feels very inefficient. Is there a better way? It's probably worth noting that you should step back and ask yourself if you really need the items stored in a list with random indexed access, or if you just need to enumerate each of the keys or values from time to time. You can easily iterate over the ICollection exposed by MyDictionary.Values. foreach (int item in dict.Values) { dosomething(item); } Otherwise, if you actually need to store it as an IList, there's nothing particularly inefficient about copying all the items over; that's just an O(n) operation. If you don't need to do it that often, why worry? If you're annoyed by writing the code to do that, use: IList<int> x = new List<int>(dict.Values); which wraps the code that you'd write into a copy constructor that already implements the code you were planning to write. That's lines-of-code-efficient, which is probably what you actually care about; it's no more space- or time-efficient than what you'd write. This should work even on 2.0 (forgive the C# 3.0 use of "var"): var dict = new Dictionary<string, int>(); var list = new List<int>(dict.Values); Try the following public static class Util { public static List<TValue> CopyValues<TKey,TValue>(Dictionary<TKey,TValue> map) { return new List<TValue>(map.Values); } } You can then use the method like the following Dictionary<string,int> map = GetTheDictionary(); List<int> values = Util.CopyValues(map); IIRC C# 2.0 can't infer the generic types, so you have to specify them in the call: Util.CopyValues<string,int>(map) @Guffa, my code will work in C# 2.0 and up. 
C# cannot do local type inference in 2.0, but it can still do method type inference. If you can use an IEnumerable<int> or ICollection<int> instead of a List<int>, you can just use the Values collection from the dictionary without copying anything. If you need a List<int>, then you have to copy all the items. The constructor of the list can do the work for you, but each item still has to be copied; there is no way around that.
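The trade-off discussed above, iterating a dictionary's value collection directly versus materializing an O(n) copy into a list, has a direct analogue in Python; the sketch below is illustrative only, and the `copy_values` helper name is this example's own, not part of the thread:

```python
def copy_values(mapping):
    """Copy a dict's values into a list, mirroring `new List<int>(dict.Values)`.

    This is an O(n) copy: each value is visited exactly once, the same
    cost the C# List<T> copy constructor pays internally.
    """
    return list(mapping.values())

ages = {"alice": 30, "bob": 25, "carol": 41}

# Often no copy is needed at all: iterating the values view is the
# Python analogue of enumerating dict.Values in C#.
total = sum(ages.values())

# When random indexed access is genuinely required, materialize a copy:
age_list = copy_values(ages)
```

As in the C# answers, the copy is only worth paying for when indexed access is genuinely needed; a plain iteration over the view is otherwise sufficient.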
STACK_EXCHANGE
Adobe Photoshop CS3 Update Please always check NIfTI_tools.pdf for detailed descriptions and the latest updates. Actually I don't know how to understand a bash script. For more detailed information please refer to our review paper. I was trying to save an MRI image, after some processing using MATLAB scripts, in Analyze format and view it using ImageJ. Related topics about NIfTI to DICOM Secure online ordering We selected this company to process your orders because it is a reliable, respected credit card processor, so you can really trust the eCommerce company with your credit card information. This will bring up some text like this: It gives an example of how to run the program. Here is my suggested change (from my git patch file):

--- a/niftitools/xform_nii.m
+++ b/niftitools/xform_nii.m
@@ -324,13 +324,15 @@ function [hdr, orient] = change_hdr(hdr, tolerance, preferredForm)
             hdr.hist.srow_y(4)
             hdr.hist.srow_z(4)];
-   if det(R) == 0 | ~isequal(R(find(R)), sum(R)')
+   if det(R) == 0 || ~isequal(R(find(R)), sum(R)')
       hdr.hist.old_affine = [ [R;[0 0 0]] [T;1] ];
-      R_sort = sort(abs(R(:)));
-      R( find( abs(R) < tolerance*min(R_sort(end-2:end)) ) ) = 0;
+      resolution_matrix = diag(hdr.dime.pixdim(2:4));
+      R_prime = R/resolution_matrix;
+      R_prime = R_prime.^2;
+      R( find( R_prime < tolerance ) ) = 0;
       hdr.hist.new_affine = [ [R;[0 0 0]] [T;1] ];

I also square the components of the matrix; that way all the columns sum to 1, so you can check the absolute value of each element rather than element Related topics about DICOM to NifTI Best, Shereif 27 Dec 2017 Hello Shereif Haykal, That sounds a little strange for the same protocol. However, I still believe that the error is caused by the corrupted image. Thanks to its practical and intuitive settings, the tool should meet the requirements of many users looking for a straightforward solution for creating NIfTI files from DICOM images. 
I have compressed DICOMs/enhanced DICOMs from Philips – would it be possible to export to standard DICOM with the toolbox? If it is the former, then I'm not sure why it has to be relative to img(1,1,1), as opposed to the overall offset of the slab. DICOM to NifTI 1.7.12 The design is to read the b-value from the first slice of each volume, which should extract all b-values. By having both coordinate systems, it is possible to keep the original data (without resampling), along with information on how it was acquired (qform) and how it relates to other images via a standard space (sform). I redownloaded the toolset and still have the same issue. However, free diffusion in DTI assumes D is only dependent on the direction of G, i.e. . Adobe Photoshop CS6 update Could you please let me know which lines do this transformation? This web page hosts the developmental source code – a compiled version for Linux, macOS, and Windows of the most recent stable release is included with .
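The idea behind the patch quoted earlier (dividing the rotation part of the affine by the voxel sizes so that a single tolerance applies to direction cosines rather than raw millimetre values) can be sketched in Python/NumPy. This is an illustrative reimplementation, not the toolbox's actual code; variable names follow the patch, and the default tolerance is an assumption:

```python
import numpy as np

def zero_small_components(R, pixdim, tolerance=0.1):
    """Zero near-zero entries of the 3x3 rotation part of a NIfTI affine.

    R is divided by the voxel sizes (pixdim) so each column becomes a
    direction cosine; squaring the result makes every column sum to 1,
    so one absolute tolerance can be applied to all elements.
    """
    R = np.array(R, dtype=float)
    resolution_matrix = np.diag(pixdim)             # voxel sizes on the diagonal
    R_prime = R @ np.linalg.inv(resolution_matrix)  # MATLAB's R/resolution_matrix
    R_prime = R_prime ** 2                          # columns now sum to 1
    R[R_prime < tolerance] = 0.0
    return R

# A nearly axis-aligned affine with 2 mm voxels and small oblique terms:
R = [[2.0, 0.02, 0.0],
     [-0.02, 2.0, 0.0],
     [0.0, 0.0, 2.0]]
cleaned = zero_small_components(R, pixdim=[2.0, 2.0, 2.0])
```

With the original raw-value thresholding, the appropriate cutoff would depend on the voxel size; normalizing first makes the tolerance dimensionless, which is the point of the suggested change.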
OPCFW_CODE
...WHMCS order form 1. Create SSH account (chmoded so other users won't have access to the homedir) 2. Create ruTorrent+rtorrent account 3. Create OpenVPN account 4. Create Webproxy account 5. Create FTP/FTPS account We must be able to suspend VPSes based on bandwidth overuse. Control panel: 1. Installable apps such as Plex, Sickbeard, ownCloud Project to add AR to my art website to allow visitors to select a piece of artwork and visualise this on their own home walls from different angles through their smartphone/iPad etc. Must also be able to zoom in/out, rotate and perform any other usual AR functions. Need to see any examples of work you’ve completed like this. ...connection, command line, XML, JSON or whatever means of interaction you wish to provide. I will need to be able to move around in any visualization or, at minimum, be able to rotate and zoom. You should be comfortable with C#/VB.NET, Forms/WPF/Console apps and 3D to be able to complete this quickly and efficiently; efficiency is key. You must block your Using the 10 supplied PNG image files of the variations on court colours and sizes, please animate these on the supplied web button to flip and rotate through the 10 different images and cycle. Try to alternate the order of the court colours and sizes so each version looks different. The web button size upon completion needs to be: Width: 292 px ...want an app that looks similar to the Google Photos frontend, but streamlined for the following tasks. Critical requirements (in "photo stream" view / landing page): 1) Rotate images (one click clockwise or counter-clockwise) 2) One click to "archive" 3) Display which albums (if any) the photo is already in 4) Add to album 5) Grouped/nested display We need a web-based application like MS Paint. Features we require: 1. Pencil 2. Brush 3. Eraser 4. Text 5. Shapes 6. Color selection 7. Rotate image 8. 
Undo and redo. Default image selection; after editing, save the image at the specified folder location "D:myfolder" and record the update in a MySQL database. ...project we need a camera similar to the one in WhatsApp. After taking a picture the user should be able to crop, rotate and draw a simple line on the image. The user should be able to revert any action made on the image (crop, rotate, draw). We want to fund the initial release of the library and plan to release it as open source under our GitHub account
OPCFW_CODE
Microsoft Office 2010 takes on all comers OpenOffice.org, LibreOffice, IBM Lotus Symphony, SoftMaker Office, Corel WordPerfect, and Google Docs challenge the Microsoft juggernaut Microsoft Office 2010 takes on all comers: Corel WordPerfect Office X5 There was a time, in the DOS days, when WordPerfect was for many professionals the word processing program. Law offices still swear by it, since it's heavily backward compatible with previous versions and has features that appeal to legal professionals. WordPerfect has since been made part of a suite that contains the Quattro Pro spreadsheet (originally from Borland) and Corel's own Presentations application. The newest version of the suite, WordPerfect Office X5 (or version 15), was released in 2010, and has little to attract users from other suites. It's slightly less expensive than Office 2010 -- the home version is $99 and runs on up to three PCs -- but SoftMaker Office and the various OpenOffice.org derivatives all offer more. When you launch WordPerfect, Quattro Pro, or Presentations, the first thing you see is the Workspace Manager -- a way to automatically set the program's look and the menu options to one of a number of included templates depending on the user's preferences. Aside from the standard WordPerfect mode, there's Microsoft Word mode, which includes a toolbar of document compatibility options and a sidebar that gives you quick access to common document functions; WordPerfect Classic mode, which emulates the white-on-blue look of the old DOS-era WordPerfect and even the macros of same; and WordPerfect Legal mode, which brings up toolbars related to legal documents. If you open anything other than native WordPerfect documents, the program runs a conversion filter first, a process that can take anywhere from a fraction of a second to a minute or two depending on the file size and source format. 
The conversion process for OpenDocument word processing (.odt) documents, even small ones, is much slower than for Word files (.doc or .docx), and as with the other programs here the level of fidelity for document conversion will vary widely. For instance, inline comments from both Word and .odt documents were preserved, but any information about who had made specific comments didn't seem to survive the conversion. The mortgage calculator spreadsheet loaded in Quattro, but just barely. The charts didn't display any values, and the sheet itself lost most of its functionality; most of the cell formulas didn't work. While I was able to get an existing PowerPoint presentation to import, the transitions were all replaced with simple wipes and many presentation details (such as the aspect ratios of slides) didn't translate accurately. That's where file format support ends -- WordPerfect Office can't open spreadsheets or presentations in Office 2007/2010 or OpenDocument formats. Most of what drew people to WordPerfect in the first place has been aggressively preserved across the many versions of the program. Take the way WordPerfect deals with document formatting: The user can inspect the formatting markup for a document in great detail and edit it directly. It's a great feature. But the general stagnancy of the program is off-putting, like the fact that WordPerfect still doesn't support Unicode after all this time. Open a document with both Western and non-Western text and you don't even see gibberish -- non-Western text simply doesn't display. For this and many other reasons, WordPerfect Office X5 is unlikely to appeal beyond WordPerfect's existing user base. Most of WordPerfect's features appeal mainly to the program's die-hard users, not newcomers.
OPCFW_CODE
Please introduce threshold to post documentation requests We are receiving documentation requests from new Stack Overflow members with a reputation of 1 that look like this: I ran this through Google Translate, and it's clearly SPAM. Please, can someone raise the minimum required reputation for posting user requests? Otherwise we'll keep on getting these messages. See the comment by TylerH: This is not a duplicate of How to report users spamming in Documentation requests?. That is asking for a flag feature on doc requests. This is asking for a threshold on asking for doc requests to begin with. Both questions are related, but mine was different. It received a different answer (which I accepted). I'm looking forward to the update of the question. Kudos to Shog9 and to Adam Lear for implementing the blacklisting functionality. You know the excrement has hit the fan when the iText guy himself is complaining about it. Related: http://meta.stackoverflow.com/q/339215/2675154 It's the horrible faux italic Chinese font that galls you most, right? Actually, what galls me the most is that this is indistinguishable, quality-wise, from many of the submissions for the C++ tag documentation. This is not a duplicate of http://meta.stackoverflow.com/questions/339215/how-to-report-users-spamming-in-documentation-requests. That is asking for a flag feature on doc requests. This is asking for a threshold on asking for doc requests to begin with. @CodyGray Did you try running this through a C++ compiler? It looks like it might actually be valid code. Update: These are now thoroughly blacklisted. If they figure out how to get past that, I'll blacklist them further. Kudos to Adam Lear for implementing the blacklist. (detailed answer follows) Well, we could. 
Here's the breakdown of actioned topic requests grouped and sorted by the maximum privilege held by the requester:

Maximum Privilege     ActionedDtrs  PctTotal
--------------------  ------------  ---------------
null                  158           5.154975530179
Newbie                35            1.141924959216
VoteUpMod             165           5.383360522022
PostCommenting        77            2.512234910277
Bounty                103           3.360522022838
CommunityPostEditing  1343          43.817292006525
PostEditing           250           8.156606851549
CloseQuestion         472           15.399673735725
ModerationTools       143           4.665579119086
TrustedUser           319           10.407830342577

By "actioned" I mean "caused a topic to be created" (many more appear to have prompted the creation of drafts that never got approved). Here's the breakdown of all requests that weren't part of this spam wave:

Maximum Privilege     ActionedDtrs  PctTotal
--------------------  ------------  ---------------
null                  712           9.55448201825
Newbie                109           1.462694578636
VoteUpMod             489           6.561996779388
PostCommenting        246           3.30112721417
Bounty                262           3.515834675254
CommunityPostEditing  3325          44.618894256575
PostEditing           530           7.112184648416
CloseQuestion         934           12.533548040794
ModerationTools       322           4.32098765432
TrustedUser           523           7.018250134192

Slightly more on the low end, but still over 90% of requests would do just fine if there was a 10-rep threshold for that privilege. Now, just one problem: there's no actual privilege for this. I can't just crank up the threshold to 10 and be done; someone'd have to add logic to check against the privilege. Meanwhile, these same spammers have been badgering Q&A for over a year; we've dealt with them by putting a blacklist in place to block non-trivial amounts of CJK text. For the past three days, I've been dealing with Docs spam by just periodically destroying anyone posting non-trivial amounts of CJK as a Topic Request; I've missed maybe a dozen requests because I'm only checking the title, but that leaves a false-negative rate of under 1% and a false-positive rate of 0. So... 
If we're gonna make a change to restrict this, I'd rather go with the option that blocks zero actionable requests than the option that would've blocked even a handful of actionable requests. FWIW, we added stricter rate-limiting for folks under 100 rep yesterday (1 request every 10 minutes) - that cut down the volume of spam a lot: I'd kinda hoped they would just give up after that, but... No. Still creating new accounts, posting spam, getting destroyed. Trivia: I've dismissed more spam requests in the past 2 days than all of the actioned requests ever created. And I got SOCVR into a cleanup effort to dismiss over 900 spam requests in the JS documentation. Ok, Adam's working on making this happen, I'm gonna go drink more coffee now so I can maybe proof-read @KevinL ;-P @Shog9, Your actions and continuous diligence on this, and many other issues, are greatly appreciated by everyone. Thank you! [Well, OK, the spammers probably don't appreciate this most recent effort :-).] Thanks @Makyen. And yes, they appear to have been very frustrated.
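Detecting "non-trivial amounts of CJK text" as described in the answer can be approximated with a simple Unicode-range heuristic. This is an illustrative sketch, not Stack Overflow's actual blacklist code; the 0.3 ratio threshold and the choice of blocks are assumptions of this example:

```python
def cjk_ratio(text):
    """Return the fraction of non-whitespace characters in the main CJK blocks."""
    def is_cjk(ch):
        cp = ord(ch)
        return (0x4E00 <= cp <= 0x9FFF      # CJK Unified Ideographs
                or 0x3040 <= cp <= 0x30FF   # Hiragana and Katakana
                or 0xAC00 <= cp <= 0xD7AF)  # Hangul syllables
    chars = [ch for ch in text if not ch.isspace()]
    if not chars:
        return 0.0
    return sum(is_cjk(ch) for ch in chars) / len(chars)

def looks_like_cjk_spam(title, threshold=0.3):
    """Flag a topic-request title whose text is substantially CJK."""
    return cjk_ratio(title) >= threshold
```

A production filter would combine a heuristic like this with rate limiting and reputation checks, as the thread describes, rather than rely on character ranges alone, since plenty of legitimate posts contain some CJK text.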
STACK_EXCHANGE
Professional programmers are mostly self-educated, love their work and make comfortable salaries, particularly if they work with hot languages like Objective-C, Node.js and C#. They are overwhelmingly male, although there is some evidence that is changing, and they make an average of nearly $90,000 in the U.S., although Ukrainian coders have the highest standard of living. Big Data technologies like Cassandra, Spark and Hadoop command pay premiums in excess of 30 percent, and the job of full-stack Web developer is an up-and-comer, with nearly one-third of programmers now classifying themselves as such. Scandinavians drink the most caffeinated beverages per day, by the way, a distinction in which the U.S. doesn’t even crack the top-10 list. Those are just a few of the findings of an annual survey conducted by Stack Exchange Inc.’s popular Stack Overflow question-and-answer network. The respondent base was only a tiny percentage of the 36 million people whom International Data Corp. considers professional programmers, but that’s still 26,000 souls from 157 countries. And they shared a lot of information about themselves, like the fact that 48 percent never received a degree in computer science. The survey results are a truly international representation, with over three-quarters of the respondents hailing from outside the United States. India ranks as the second-biggest source of traffic to Stack Overflow with a 12.5 percent share, followed by the UK at 5.5 percent, with the remainder scattered among more than 150 other countries across five continents. The role of programmers varies just as greatly, ranging from full-stack developers capable of managing every part of their projects (who make up the biggest demographic on the site) to specialists focused on some of the narrowest and most difficult programming challenges of their respective industries. But the majority – the enterprise software engineers, managers and data scientists – are somewhere in between. 
Yet while it’s undoubtedly among the most widely spread and influential subsets of the global workforce, diversity nonetheless is still very much a work in progress for the development community, particularly when it comes to bridging the oft-discussed gender gap. Over 90 percent of the respondents to the survey identified as male, compared to a mere 5.6 percent who said they’re female, highlighting that the divide is as big as ever. India had the largest base of female respondents, at 15.1 percent, compared to 4.8 percent from the U.S. However, there is reason to be optimistic going forward. The survey indicates that women who code are twice as likely to have less than two years of experience as their male counterparts, which seems to point toward more women entering the industry. That could potentially snowball significantly over the coming years. Over 29 percent of respondents to the survey reported that they’re already working remotely at least part of the time, a substantial increase from the 21 percent who indicated that they were coding away from the office in last year’s survey. And half said that the ability to telecommute is important, which is driving a noticeable shift in the policies of employers. Another contributing factor to that is the desire of companies to expand their search beyond the local candidate pool, which is especially important for positions involving relatively new technologies such as Hadoop. Accordingly, the poll reveals that positions focused on niche or emerging tools tend to pay more. Apple’s Objective-C language ranks as the most lucrative programming syntax, followed by Node.js, yet neither is among the ten most popular choices for programmers. The fact that coding is a labor of love as opposed to a purely monetary pursuit was also reflected in the fact that the average developer spends seven hours per week programming on the side, whether for fun or profit. 
Likewise, two out of three respondents said that their motivation for visiting Stack Overflow is a passion for learning, followed by 55 percent who cited the satisfaction of helping peers. That’s good news for Stack Exchange, and probably ensures many more developer surveys to come.
OPCFW_CODE