Document
stringlengths
395
24.5k
Source
stringclasses
6 values
package dev.deyve.algorithmsjava.sorting; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.Arrays; /** * Quick Sort */ public class QuickSort { private static final Logger logger = LoggerFactory.getLogger(QuickSort.class); public static Integer[] sort(Integer[] array) { return sort(array, 0, array.length - 1); } private static Integer[] sort(Integer[] array, Integer start, Integer end) { if (start >= end) { return array; } var boundary = partition(array, start, end); sort(array, start, boundary - 1); sort(array, boundary + 1, end); return array; } private static Integer partition(Integer[] array, Integer start, Integer end) { var pivot = array[end]; var boundary = start - 1; for (var index = start; index <= end; index++) { if (array[index] <= pivot) { swap(array, index, ++boundary); } } return boundary; } private static void swap(Integer[] array, Integer firstIndex, Integer secondIndex) { var temporaryVariable = array[firstIndex]; array[firstIndex] = array[secondIndex]; array[secondIndex] = temporaryVariable; logger.info("Swap: {}", Arrays.toString(array)); } }
STACK_EDU
[07:37] <brobostigon> morning boys and girls. [08:20] <zmoylan-pi> o/ [08:26] <brobostigon> o/ [09:43] <knightwise> hey peepz [09:45] <brobostigon> hi knightwise [09:46] <knightwise> just installed 18.04 on my old imac [09:46] <knightwise> very impressed so far [09:49] <brobostigon> :) [09:56] <knightwise> also installing it on my xps13 , cant use the windows version on my xps because of the GDPR [09:58] <brobostigon> i havent tried it yet, might roll a live usb before i upgrade, to test things out. [09:58] <zmoylan-pi> i usually wait a week or two after the release in case there are any whoopsies [09:59] <brobostigon> yes, hence my precaution of testing prior also. [10:00] <knightwise> its pretty clean . Its amazing how fast gnome/unity is right now [10:00] <knightwise> even on a dual core imac with 4 gigs of ram and a 128ssd [10:01] <knightwise> still no bluetooth love though. i think it has something to do with the firmware of the bluetooth chip of my xps [10:01] <knightwise> so no bluetooth mouse :( [10:02] <brobostigon> :( [10:02] <brobostigon> i had problems like that with the wifi on my ibm thinkpad. [10:02] <knightwise> which is a shame if you have a 1200 euro top of the line laptop and need to plug in an IR receiver for your mouse [10:02] <zmoylan-pi> i'm not fan of bt mice or keyboards. you think about a problem, you come up with a solution. you start typing and have to wait 5 seconds for bt to unsuspend... :-/ [10:03] <knightwise> Hmm.. dont have that problem very often [10:04] <zmoylan-pi> i've seen it on every bt keyboard so far and i've seen a fair few. haven't tried apple keyboard mind and they may have added a few shortcuts to make it more elegant [10:05] <knightwise> does anyone else have BT issues with their XPS ? [10:05] <zmoylan-pi> is anyone awake with an xps you mean :-) [10:48] <knightwise> zmoylan-pi: correct :)
UBUNTU_IRC
As part of the GMS contract for 2019/20 a new 'Quality Improvement' domain has been introduced which includes 'End of Life Care'. End of Life Care QI003. The contractor can demonstrate continuous quality improvement activity focused upon end of life care as specified in the QOF guidance. QI004. The contractor has participated in network activity to regularly share and discuss learning from quality improvement activity as specified in the QOF guidance. This would usually include participating in a minimum of two peer review meetings. Practices will need to: - Evaluate the current quality of their end of life care and identify areas for improvement – this would usually include a retrospective death audit (QI003) - Identify quality improvement activities and set improvement goals to improve performance (QI003) - Implement the improvement plan (QI003) - Participate in a minimum of 2 GP network peer review meetings (QI004) - Complete the QI monitoring template in relation to this module (QI003 + QI004) How to do a retrospective death baseline analysis (audit) Practices should review a sample of X deaths over the previous 12 months to establish baseline performance on the areas of care listed above and to calculate their expected palliative care register size. A suggested template to support data collection for the audit can be downloaded from here. The number of deaths each year will vary between individual practices due to differences in the demographics of the practice population. Practices could use the number of deaths reported in their practice populations in the previous year to assess how well they are identifying patients who would benefit from end of life care. An audit standard against which to assess current practice would be that the practice was successfully anticipating approximately 60% of deaths. There are reports available which can be accessed at 'Ardens > Conditions | Frailty and End of Life > Activity Last Year'. Information about the Ardens 'End of Life and Palliative Care' template can be found here. End of Life register reports are available at 'Ardens > Team | Meetings > End of Life'. End of Life Report Output To enable practices to breakdown data quickly and easily there is a report output setup called 'End of Life'. How to use a Report Output - Run your chosen report - Right click on the report and select 'Show Patients' - Just above the report list click on 'Select Output' - A new window will open, select 'Pre-defined report output - Select 'End of Life' and click 'OK' You should then be able see all the relevant data on one screen which you can easily export to excel.
OPCFW_CODE
Message from a shy user How can I make sure I'm not distracting researchers from their work, with a bunch of questions. I feel I could ask a lot, primarily because I have studied very little. I don't think I could ask a lot, if I had studied a lot. I don't really think my questions will not permit the descriptive aspects of the science to progress. I, also, think, my questions, might lead to progress. Am I invited to ask, on this site? Thanks. That's actually a great question. I think the is a balance, and whether to post or not is something you will have to decide about. Here are some considerations before posting... You should not ask questions that are already answered on this (or other) stack exchange site. This necessarily requires you do some sort of research. I think (at least) you should try to google the question phrased in a bunch of different ways. It's fairly possible that the answer does exist online, but you just don't know what to search for, but that is fine, but it should be clear that you (or the poster) made an effort to find the answer your/themselves. There is also the factor of "how much useful is this going to be for others". Those that are of general interest are always very welcomed questions. You can see that some of the old questions are really heavily used resource, while some other are so specific are seen only few times. So, more general question you will have, the better. Sometimes it is good to think if you can phrase your question in a more general way. Finally, consider how frequently you post yourself and how keen are people to help you out (I think there is a bit of common sense). If it feels all alright, it probably is. SE folks are not the subtlest in explaining to people what's sub-optimal about their questions. So to sum up, you can ask as many relevant questions as you want, even if they are basic, if you don't annoy the heck out of the community (and you will be able to tell). Stuff to make people like your questions: Be concrete, give details, but cohesive and preferentially reproducible (with toy data and small snippets of code). Show initiative and do your own research, that will both help others to understand the true source of your problem, but also make you more relatable. Disclaim everything that should be disclaimed - is it homework? Are you a developer of the method you are discussing? etc. Be polite and kind, but don't write "Thanks" or apologize for basic questions, that is all fine. Gratitude should be expressed through upvotes or accepting the right answer (that makes the answer more useful for the next person facing the same problem). So don't worry THAT much, just ask :-)
STACK_EXCHANGE
How to test new nameserver before to make it live Possible Duplicate: Testing nameserver configuration using it I'm thinking of changing from my hosting provider nameservers to Route53 (Amazon's distributed nameserver) for several reasons. I'm currently setting all records like they are on my current host page (I can see my DNS settings but I cannot change them). Since I'm not used to working with Route53's hosted zones, is there a way I can test the new nameserver resolution settings before updating the domain to point to the new nameserver? For example, I'm not sure if the last dot in CNAME records is necessary or not... Use dig dig mydomainname.example @mynewnameserver.example You can easily do that using nslookup, the process is next: 1) Enter nslookup 2) Run server $YourDNSServerName where $YourDNSServerName is the one of the DNS servers responsible for your zone at Route53 like nslookup ns-131.awsdns-16.com 3) From there just enter your records and see the responses. Thanks, it works!! Can you please help me to understand what should i set for these records? @ IN MX 1 aspmx.l.google.com. @ IN MX 5 alt1.aspmx.l.google.com. @ IN MX 5 alt2.aspmx.l.google.com. @ IN MX 10 aspmx2.googlemail.com. @ IN MX 10 aspmx3.googlemail.com. I should leave the domain blank, am I right? yes if you're setting MX records for the domain, the domain should be blank. Use dig and not nslookup for any serious DNS debugging. nslookup has many known flaws and has been depreciated. http://veggiechinese.net./nslookup_sucks.txt Set up your name server and then configure a test machine to use it for DNS. I'm not sure what client system you're using, but since I'm on a Windows box, at the moment, a screenshot of where you'd put that new nameserver value on a Windows machine. Apply those DNS settings to a number of machines for testing purposes, use as normal, and ensure nothing's broken. This will not work. This screen allows you to enter the recursive nameservers you want to use. This is certainly not the way to test new authoritative nameservers (a lot of things will start to break as you will get DNS replies for only a very small subset of names, or even none at all). Why wouldn't that work? The flow goes: 1. Change Windows to use new authoritative nameservers. 2. Test resolution of relevant domain names. 3. Change back to some recursive nameservers. - During number 2 you won't be able to resolve most other domain names so you might want to be aware if you're testing a website that some content such as from a CDN may not load but apart from that it seems fine to me.
STACK_EXCHANGE
Or at least the start of my career in IT… I was fortunate and had a job offer lined up. My college was big on internships. I don’t blame them, it was a great way to start in your field and get some experience. Figure out what you like and perhaps even more important, what you don’t like. I started college as a Information Security major. Red Team vs. Blue Team, intrusion detection, ethical hacking. That was what high school me thought I wanted to do. That dream faded my sophomore year after some grueling network courses. I understood the fundamentals and the practical side of it all. But once we got into RSTP and BGP, I lost interest pretty quickly. This was compounded by a foreign professor who decided to take a vacation for the first few weeks of the semester leaving us to struggle through complex labs with little instruction or feedback. I made a small pivot just in time for a internship in systems administration. While I had a bit less course work in this particular area. I was a quick read and motivated to learn. My job was to sit on an outlook mailbox and process user account creations/deletions and permission changes. Exhilarating. The actual process required a lot of data entry and providing paper-trails for auditing purposes. The same date would be entered multiple places which led to the potential for human error. This was data entry. There was little technical skill required once you had the process down. I got to the end of the first week and said to myself. I’ve gotta find an easier way to do this. The systems I was using were primarily AIX, an old IBM offshoot of Unix popular in the enterprise space. Lacking the modern conveniences of modern bash environments… I was left with Kornshell. Not being familiar with it. I simply looked for a way to automate my own input to the remote sessions that I was spinning up with putty all day. Luckily putty allows you to pipe in commands, and thus V1 of automating my job was born. putty.exe [email protected] -m c:\local\path\commands.txt I slapped some PowerShell in front to query for the parameters that I needed, saved the output to a file, then passed the file off to putty to get executed remotely. Voila! Quick and dirty. I blasted through the backlog of work left by the previous intern in just a day or two. And just as quickly found myself sitting at my desk with nothing to do. Version 2 came with some upgrades. I looked up some kornshell, and built out a little CLI for it all. Made some scripts for the common tasks. For instance with password resets, all I had to was enter the username and the request number. The script would generate a temp password, reset it, unlock the users account if locked, email the user, and save off the log for auditing. This freed me up to take on some real work.. aka not intern work. My team saw the work I put in and while ultimately I didn’t get a position on this team. I did stay with the company, and not in system administration, but in development.
OPCFW_CODE
Yesterday I spent a lot of time,faffing around, trying to establish the best way to allow guests access to our Internet connection without compromising our network, while at the same time being able to filter the content they can access online....... Now, I still feel like I'm back at square one after contemplating solutions that appear to be too costly, or have a low chance of success given my current setup. I'd like to achieve the following:- A guest wireless access point running >separate SSID from existing network >The ability to filter content >User access restriction...........(voucher/generated pass key's/ accounts would be a bonus) The reason I'm looking to have the above implemented is to that I can eventually allow staff access to it aswell as guests, for use during their breaks. Any advice would be appreciated! Depends what your current network looks like. If you have a professional firewall/content filtering appliance, you might be able to set this solution up without any extra hardware. Lets say for a minute that you don't though, you could provision an old machine, install a community firewall distro (Endian would work fine), and have it running as a DHCP server, firewall, content filter etc, set it up to run on a different subnet, if your using 255.255.255.0 atm, then use 255.255.248.0 or something, and use a different IP range, again if you have something like 10.0.0.* at the moment, then use 172.16.*.* or 192.168.*.*, at that point you'll have a working solution to the point of where the guest network meets your work network. How to proceed beyond that is relative to your situation, what is your current network setup at the moment? Do you have a firewall at the edge, and what do you have for employees in terms of content filtering, i'm curious, because if you have say for example, forced authentication with a proxy for Internet access, that integrates into ADS, or any other number of possible scenarios in place then you'll run into issues trying to get the guest network out to the Internet because they'll be using non domain accounts; you get the picture. How is your work network setup at the moment? I can see ways to do some of what you want, but not all of it from your existing set-up; and I don't think it would possible to make it flexible enough to then add staff access later. As previously advised, I still think that your best option would be the Bluesocket system. I've used these people before http://www.westcomnetworks.co.uk/ they are very knowledgeable, easy to work with and very helpful. Give them a call and ask if they can arrange for a demo of the Bluesocket equipment - I'm sure that they would be more than willing to talk to you about it and even lend you a device to test out. CPR + more is an IT service provider. Do you have a UTM? If not Look into the Sophos UTM. It will help secure your network, give vpn access and on top of that it works as a wireless controller (for sophos brand WAPS). You can create several wireless networks and do some nifty stuff with guest access. Here is the link: http:/ if you have any questions please feel free to ask. Ubiquiti also makes a great solution for Wifi and Guest networks at a VERY reasonable price. Check out their Unifi series Access points. Their standard access points start around $75.
OPCFW_CODE
Yesterday I was trying to install the new SharePoint 2010 beta on a virtual machine and had a little bit of fun. Well actually the install went really easy and I started with the full SharePoint 2010 Enterprise Beta. The fun started with the configuration wizard it got 5 steps in and failed with a Timeout Exception. After simply retrying it and getting the same thing I started looking around and found this forum post. It talks about lack of memory and how SharePoint 2010 needs a lot of memory to install and I tried like mad to make sure the VM had enough memory. End of the story this turned out to not be what was causing my blocking issue. As a side note, I sure hope the SharePoint team doesn’t make it so SharePoint does need that much memory for a basic install as suggested. There is a good blog post by Jie Li that was linked to by the above forum post that was helpful. First it had the product keys – Microsoft mailed me some but it took over 24 hours for me to get them (same ones) and nothing on the download pages really told me where to find them (at least I didn’t see it). The post also has some hotfixes that you have to have depending on your OS and configuration. Additionally, if your running on a domain controller which I was it had some setup to get the sandbox up and running. Now back to the timeout exception, it was still happening and honestly if I didn’t need SharePoint for something I’m working on I would have thrown it to the side and not looked back. Being determined I tried several different memory configurations and determined it had nothing to do with memory, and further through SQL profiling determined it wasn’t a database timeout either. From this forum post I got the idea it might be a service start up issue. Originally, I didn’t pay enough attention to this post because it talked about a type not loading witch didn’t match my error. Later in the post it talks about service not starting and to try it manually, that didn’t show the error either. Additionally, it talked about registry keys to delete which turned out I didn’t have those keys. Getting frustrated and desperate to get this configured I started to get more creative. Since I was on a virtual machine I figured worse case I restart from last snapshot so I got brave and deleted a key at a time. For me the following key did the trick and the config zoomed along past the error to a successful completion. So which key? Inside the following registry path.. [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\14.0\WSS\Services\ I deleted the following key I’m normally not one to post about or suggest this type of black magic fix, but in this case it made the difference of me using the beta or not so here it is.
OPCFW_CODE
Custom Govt Compliant Websites Component-based programming has grown to be extra well known than in the past. Barely an software is created today that does not involve leveraging components in certain variety, generally from diverse distributors. As apps have grown much more refined, the need to leverage parts distributed on distant equipment has also grown. An example of a component-based software is undoubtedly an end-to-end e-commerce option. An e-commerce application residing on the Website farm requires to post orders to your back-end Company Resource Setting up (ERP) application. In lots of circumstances, the ERP software resides on distinct components and might run on a various operating system. The Microsoft Distributed Part Object Model (DCOM), a distributed object infrastructure which allows an software to invoke Component Item Product (COM) parts set up on one more server, has actually been ported to a number of non-Windows platforms. But DCOM has not attained huge acceptance on these platforms, so it really is hardly ever made use of to aid interaction amongst Windows and non-Windows desktops. ERP application vendors normally develop elements for that Windows system that talk along with the back-end process by way of a proprietary protocol. Some companies leveraged by an e-commerce application may possibly not reside in the datacenter in the least. One example is, in the event the e-commerce software accepts credit card payment for goods purchased because of the client, it should elicit the companies with the service provider financial institution to method the customer's bank card details. But for all practical functions, DCOM and linked systems this kind of as CORBA and Java RMI are limited to apps and parts mounted in the corporate datacenter. Two key motives for this are that by default these technologies leverage proprietary protocols and these protocols are inherently connection oriented. Clients speaking using the server via the internet facial area many prospective limitations to speaking with the server. Security-conscious network administrators around the world have applied corporate routers and firewalls to disallow basically each and every type of interaction over the web. It often can take an act of God for getting a community administrator to open up ports outside of the bare minimum. If you're fortunate more than enough to get a community administrator to open up the suitable ports to guidance your support, likelihood is your clientele will not be as fortuitous. Like a consequence, proprietary protocols these people used by DCOM, CORBA, and Java RMI are usually not realistic for Internet eventualities. The other challenge, as I reported, using these systems is the fact they are inherently connection oriented and so can not deal with network interruptions gracefully. Because the World wide web is not below your direct control, you cannot make any assumptions in regards to the good quality or dependability from the connection. If a community interruption occurs, the next contact the client tends to make for the server could fail. The connection-oriented nature of these systems also makes it demanding to create the load-balanced infrastructures required to attain significant scalability. As soon as the connection in between the shopper plus the server is severed, you can not only route the subsequent ask for to another server. Developers have made an effort to conquer these constraints by leveraging a model called stateless programming, nevertheless they have experienced minimal accomplishment since the systems are fairly hefty and make it pricey to reestablish a link having a remote object. Because the processing of a customer's bank card is completed by a distant server on the World-wide-web, DCOM will not be perfect for facilitating interaction concerning the e-commerce consumer and also the bank card processing server. As within an ERP option, a third-party component is commonly mounted in the client's datacenter (in such a case, from the credit card processing option service provider). This ingredient serves as tiny in excess of a proxy that facilitates interaction between the e-commerce software package as well as the service provider lender through a proprietary protocol. Do you see a pattern listed here? Due to the limitations of current systems in facilitating conversation between laptop or computer methods, computer software distributors have usually resorted to constructing their own individual infrastructure. What this means is methods that can happen to be used to add enhanced features into the ERP process or the credit card processing process have instead been devoted to producing proprietary community protocols. In an effort to better guidance these kinds of Net eventualities, Microsoft in the beginning adopted the method of augmenting its present technologies, which include COM Online Providers (CIS), which allows you to set up a DCOM connection in between the client plus the distant element above port 80. For numerous good reasons, CIS was not extensively approved. It grew to become crystal clear that a completely new technique was necessary. So Microsoft made a decision to deal with the issue within the bottom up. Let's evaluate several of the necessities the solution needed to meet in order to realize success. - Interoperability The distant company needs to be capable to be eaten by shoppers on other platforms. - Internet friendliness The solution really should get the job done effectively for supporting purchasers that accessibility the remote support with the World-wide-web. - Strongly typed interfaces There ought to be no ambiguity concerning the kind of information despatched to and been given from a distant service. On top of that, datatypes defined through the remote support ought to map moderately very well to datatypes described by most procedural programming languages. - Ability to leverage current Net standards The implementation from the distant service need to leverage current Net specifications as much as you can and keep away from reinventing alternatives to difficulties that have presently been solved. An answer designed on commonly adopted World-wide-web criteria can leverage present toolsets and merchandise designed for the know-how. - Support for almost any language The answer shouldn't be tightly coupled to some specific programming language. Java RMI, such as, is tightly coupled to your Java language. It could be challenging to invoke functionality with a distant Java object from Visible Basic or Perl. A client must manage to put into action a fresh Web assistance or use an current World wide web company regardless of the programming language in which the shopper was written. - Support for almost any distributed part infrastructure The answer should not be tightly coupled to the individual ingredient infrastructure. Actually, you shouldn't be necessary to order, install, or keep a dispersed item infrastructure just to create a completely new distant provider or eat an present service. The fundamental protocols need to aid a base volume of conversation concerning present dispersed object infrastructures such as DCOM and CORBA. Given the title of this guide, it should really occur as no shock that the answer Microsoft produced is understood as Website services. An internet company exposes an interface to invoke a specific action on behalf of the shopper. A customer can access the web support via the usage of World wide web benchmarks. Web Companies Building Blocks The next graphic shows the main creating blocks required to facilitate distant conversation involving two programs. Continue Reading US Government Contractor
OPCFW_CODE
Jump to content Posted 01 February 2013 - 07:05 Posted 01 February 2013 - 15:59 Posted 01 February 2013 - 20:23 Posted 03 February 2013 - 10:20 What driver are you using? I am/was using the 13.1 driver from AMD's website. Weird, I have no problem like this, it might be driver related (though, we share the AMD GPU brand),I am not a pro. The first and third problems, at least, are most likely a result of the proprietary AMD graphics drivers. I would highly recommend purging them and using the open-source radeon driver instead. Your video card is very well supported by radeon, and you will almost certainly have fewer problems with it. Edit: I recommend that you read through this thread. It has lots of interesting details that you may find helpful. Posted 03 February 2013 - 17:11 I removed fglrx and it did indeed fix the flicker & virtual terminal issues, it seems to have fixed my issue with resuming from suspend as well. The only downside with what I'm using now is the performance in games is bad. My output from glxinfo | grep renderer is OpenGL renderer string: Gallium 0.4 on AMD CYPRESS, is that correct? I chucked together two scripts, one to install fglrx (when I game) and one to swap back to radeon (when I'm not), I'm guessing I'd need a reboot in-between but that's not really an issue. Would this be a sensible solution or would it cause issues in the long run? Posted 03 February 2013 - 20:01 That is an absolutely terrible idea! I strongly recommend that you don't swap drivers on a regular basis. If you really feel like you MUST swap drivers when you game, the least-bad idea is probably to install fglrx from the repository, generate a xorg.conf to force X11 to use radeon when you start your computer, then create a script to stop X11, load X11 with fglrx (and maybe a low-resource, non-compositing window manager, such as Openbox, to get higher framerates) so that you can game. Your script should probably be capable of switching back as well. Posted 04 February 2013 - 00:44 Posted 04 February 2013 - 17:11
OPCFW_CODE
package com.alphasystem.docbook.builder.test; import org.docbook.model.*; import java.util.ArrayList; import java.util.List; import static java.lang.String.format; /** * @author sali */ public final class DataFactory { private static ObjectFactory objectFactory = new ObjectFactory(); public static Emphasis createBold(Object... content) { return createEmphasis("strong", content); } public static Caution createCaution(Object... content) { return objectFactory.createCaution().withContent(content); } public static Entry createEntry(Align align, BasicVerticalAlign vAlign, Object... content) { return createEntry(align, vAlign, null, null, null, content); } public static Entry createEntry(Align align, BasicVerticalAlign vAlign, String nameStart, String nameEnd, String moreRows, Object... content) { return objectFactory.createEntry().withAlign(align).withValign(vAlign).withNameStart(nameStart).withNameEnd(nameEnd) .withMoreRows(moreRows).withContent(content); } public static Emphasis createEmphasis(String role, Object... content) { return objectFactory.createEmphasis().withRole(role).withContent(content); } public static Example createExample(String title, Object... content) { return objectFactory.createExample().withTitleContent(createTitle(title)).withContent(content); } public static Important createImportant(Object... content) { return objectFactory.createImportant().withContent(content); } public static InformalTable createInformalTable(String style, Frame frame, Choice colSep, Choice rowSep, TableGroup tableGroup) { return objectFactory.createInformalTable().withTableStyle(style).withFrame(frame).withColSep(colSep) .withRowSep(rowSep).withTableGroup(tableGroup); } public static Emphasis createItalic(Object... content) { return createEmphasis(null, content); } public static ItemizedList createItemizedList(String id, Object... content) { return objectFactory.createItemizedList().withId(id).withContent(content); } public static ListItem createListItem(String id, Object... content) { return objectFactory.createListItem().withId(id).withContent(content); } public static Literal createLiteral(String id, Object... content) { return objectFactory.createLiteral().withId(id).withContent(content); } public static Note createNote(Object... content) { return objectFactory.createNote().withContent(content); } public static OrderedList createOrderedList(String id, Object... content) { return objectFactory.createOrderedList().withId(id).withContent(content); } public static Phrase createPhrase(String role, Object... content) { return objectFactory.createPhrase().withRole(role).withContent(content); } public static Row createRow(Object... content) { return objectFactory.createRow().withContent(content); } public static Section createSection(String id, Object... content) { return objectFactory.createSection().withId(id).withContent(content); } public static SimplePara createSimplePara(String id, Object... content) { return objectFactory.createSimplePara().withId(id).withContent(content); } public static Subscript createSubscript(String id, Object... content) { return objectFactory.createSubscript().withId(id).withContent(content); } public static Superscript createSuperscript(String id, Object... content) { return objectFactory.createSuperscript().withId(id).withContent(content); } public static Table createTable(String style, Frame frame, Choice colSep, Choice rowSep, Title title, TableGroup tableGroup) { return objectFactory.createTable().withStyle(style).withFrame(frame).withColSep(colSep).withRowSep(rowSep) .withTitle(title).withTableGroup(tableGroup); } public static TableBody createTableBody(Align align, VerticalAlign verticalAlign, Row... rows) { return objectFactory.createTableBody().withAlign(align).withVAlign(verticalAlign).withRow(rows); } public static TableGroup createTableGroup(TableHeader tableHeader, TableBody tableBody, TableFooter tableFooter, int... columnWidths) { List<ColumnSpec> columnSpecs = new ArrayList<>(); for (int i = 0; i < columnWidths.length; i++) { ColumnSpec columnSpec = objectFactory.createColumnSpec().withColumnWidth(format("%s*", columnWidths[i])) .withColumnName(format("col_%s", (i + 1))); columnSpecs.add(columnSpec); } return objectFactory.createTableGroup().withCols(String.valueOf(columnWidths.length)).withTableHeader(tableHeader) .withTableBody(tableBody).withTableFooter(tableFooter).withColSpec(columnSpecs); } public static TableFooter createTableFooter(Align align, VerticalAlign verticalAlign, Row... rows) { return objectFactory.createTableFooter().withAlign(align).withVAlign(verticalAlign).withRow(rows); } public static TableHeader createTableHeader(Align align, VerticalAlign verticalAlign, Row... rows) { return objectFactory.createTableHeader().withAlign(align).withVAlign(verticalAlign).withRow(rows); } public static Term createTerm(Object... content) { return objectFactory.createTerm().withContent(content); } public static Tip createTip(Object... content) { return objectFactory.createTip().withContent(content); } public static Title createTitle(Object... content) { return objectFactory.createTitle().withContent(content); } public static VariableList createVariableList(String id, Object[] content, VariableListEntry... entries) { return objectFactory.createVariableList().withId(id).withContent(content).withVariableListEntry(entries); } public static VariableListEntry createVariableListEntry(ListItem listItem, Term... terms) { return objectFactory.createVariableListEntry().withTerm(terms).withListItem(listItem); } public static Warning createWarning(Object... content) { return objectFactory.createWarning().withContent(content); } }
STACK_EDU
Director of Big Data Engineering @ From 2015 to Present (less than a year) Principal Software Engineer @ From June 2012 to September 2015 (3 years 4 months) Senior Software Engineer, Cloud R&D @ - Designed, prototyped and built production version of the new EnergyScape web site using Node.js / MongoDB on Senior Software Engineer with Lead experience. Director of Big Data Engineering @ From 2015 to Present (less than a year) Principal Software Engineer @ From June 2012 to September 2015 (3 years 4 months) Senior Software Engineer, Cloud R&D @ - Designed, prototyped and built production version of the new EnergyScape web site using Node.js / MongoDB on Linux stack. This customer portal will further SCIenergy’s business development while demonstrating their core mission in a powerful and actionable way. - Made EnergyScape.net a compelling mobile web experience, leveraging Bootstrap from Twitter and Require.js, to enable benchmarking for business managers and data acquisition for “Building Engineers” without the anchor of a computer. - The project was deployed in the Amazon Cloud (AWS) leveraging EC2 and its load-balancing feature, S3, CloudFront, SES and CloudWatch. This Node.js solution running in AWS enabled rapid deployments to a cluster of servers with very minimal infrastructure costs. - Built deployment script to automate daily code pushes proving essential to the “release early / release often” approach. - Set up Mercurial and ticketing system through Bitbucket.org to enable better cooperation and communication with the contractors and the product manager on the project. - Acted as project manager and lead developer delivering on-schedule and completion of the Beta phase. Key milestones were met enabling successful customer and industry partner presentations From November 2011 to August 2012 (10 months) Chief Architect @ From September 2010 to December 2011 (1 year 4 months) Senior Software Engineer @ - Implemented Hadoop Map/Reduce jobs to feed data into FAN’s audience insights system. - Built and maintained multiple external and internal facing Web and Stand alone Java applications using popular frameworks and libraries including but not limited to Spring (MVC), JSF, Hibernate, Ibatis, Quartz, Google Protocol buffer, JUnit, EasyMock, Powermock and tools including but not limited to Eclipse, Tomcat6 and Maven2. - Mentored junior Java and .Net developers during my 3+ years at FAN to increase - Initiated a .Net weekly study group to go over newer aspects of the framework and potential use in our projects. - Interfaced the remote half of my team in Atlanta with the local QA, release, dba and ops teams to ease communication and remove contention points. - Refactored, improved and maintained FAN’s white label Social Network system written in C# against numerous vertically partitioned SQL Server databases, which was available on various Newscorp’s entities’ web sites like Fox News, American Idol, Fox Weather, Fox Highlights etc... From March 2007 to September 2010 (3 years 7 months) Senior Software Developer @ Designed, created and implemented enhancements and new functionalities to the Bureau of Labor Statistics’ (“BLS”) software application called TopCati using VB.Net 1.1. TopCati is a distributed application to collect US workforce data including management functionalities using Oracle 9g and MS Access 2003 as database back ends and Crystal - Modified Oracle database to fit new structure and performance needs and maintained and troubleshoot TopCati source code. - Advised project manager on .Net software architecture, increasing product stability and - Created tools to increase productivity and efficiency between BLS national office and regional data collection centers. Tools written in VB.Net and C# include: ASP.Net website to manage Crystal Report files and outputs, encryption tool for sensitive Web.config information, filter builder to automate TopCati filter creation and Windows service to monitor and replicate folder and file structure between servers for application mirroring. - Initiated and lead a weekly .Net 1.x and 2.0 study group to facilitate knowledge transfer amongst colleagues. From July 2006 to February 2007 (8 months) Lead Software Developer @ Managed the successful design and development, on deadline and under compressed schedule, of a Migration Wizard tool using .Net 2.0, to convert current Info Pak customers to next generation products, increasing RWD revenues and retaining market share. - Researched, recommended and implemented refinements to products converted from Visual Basic 6 to .Net, capitalizing on inherent efficiencies in .NET to enhance RWD’s Info Pak - Maintained and enhanced the Info Pak suite over time yielding high customer satisfaction. In 2005, sales were 188% of goal. As of 1st Quarter 2006, maintenance contract renewals exceeded the forecast to date. - Served as Tier3 product support for Info Pak to troubleshoot customer issues, yielding close to a 100% customer satisfaction rate with personal testimonials from Johnson & Johnson and Home Depot, expressing satisfaction with the quality of support received. - Trained junior developers on Info Pak and RWD procedure, increasing development team productivity through cross-training. - Mentored junior developers on VB.Net, C#, ADO.Net and web-oriented technologies such as ASP.Net and Web Services, fostering teamwork and increasing the team knowledge base. - Created several internal RWD software tools such as product support problem diagnosis and SAP payroll output formatting. Presented and distributed these tools for RWD employee use. From November 2004 to July 2006 (1 year 9 months) Software Tester @ Tested products for Welocalize’s client Manugistics, finalizing and perfecting their cutting-edge software technologies. - Sorted and resolved client software issues and trained new testers on Manugistics’ products. From June 2004 to November 2004 (6 months) Analyst-Programmer (Object Oriented Programmer) @ Enabled completion of the company’s core product, “MP’Com,” a multi-platform, multi- protocol, file transfer automated system. Created functions such as data encryption and decryption, data compression, PDF conversion and data sorting. - Conceptualized, proposed and developed “MP’Event Manager,” a multi-protocol software in Delphi6, to immediately inform MP’Com administrators of pre-set MP’Com events, enabling managers to respond rapidly. Developed “MP’Spawn” in Delphi6 to perform certain tasks automatically upon receipt of a notice from MP’Event Manager, on Windows or on Linux through Telnet remote control. These products generated over 60,000 Euros for the company. - Innovated a “light” version called “MP’ComPro” using Delphi6 and Kylix2, to serve small business needs. Enabled the company to penetrate a new market niche, increase overall market share, and generate 42,000 Euros in revenues. Used on either Linux or Windows with LAN and FTP protocols, MP’ComPro provides key functions such as the encryption and compression of data. - Recognized critical elements missing from Eukles’ IT operations and corrected them. Revised the company’s network to improve test quality and employee productivity. Implemented a backup server and a file share system. Created the company’s first PHP/MySQL intranet hotline database. From April 2002 to November 2003 (1 year 8 months) Analyst-Programmer (Object Oriented Programmer) @ Managed and developed the research and planning capabilities of one of PS’Soft’s principal products, the “Qualiparc Business Process Manager,” a DLL for Microsoft IIS that streamlined business processes for companies with an average starting point of 500,000 end functionality into the DLL using Delphi5, and ensured compatibility with Database Management System, Oracle, SQL Server, Sybase and DB2. From July 2001 to January 2002 (7 months) Conservatoire National des Arts et MItiers From 2001 to 2003 BTS, Computer Engineering; Analyst @ LycIe Estienne d’Orves From 1999 to 2001 Yann Luppo is skilled in: .NET, Java, Node.js, MongoDB, Hibernate, Spring, ASP.NET, jQuery, Amazon Cloud, Microsoft SQL Server, PostgreSQL, MySQL, Membase, RabbitMQ, Mercurial Looking for a different Get an email address for anyone on LinkedIn with the ContactOut Chrome extension
OPCFW_CODE
M: Instant Company - jstedfast http://nat.org/blog/2011/06/instant-company/ R: nikcub I think these 'what products and services does your startup use' type articles are more interesting than the usesthis series about what tools developers use. Somebody should setup a blog where they interview a startup founder each week and just ask them to list services they use along with a mini single-paragraph review of each. Edit: after thinking about it, I might just do this as a weekend project. A quick search and I couldn't find anything similar, the closest I remember is the Ajaxian blog startup interviews which they stopped doing. If you would like your startup featured email me, ill be reaching out to a few people so if there is interest I will likely get it going R: mattmanser These pop up quite often and personally I find them quite boring. A lot of it is personal choice, e.g. IRC & campfire being 'laggy', for me Google apps is meh apart from mail/calendar, you better pony up for MS office if you're dealing with a lot of other businesses, themeforest I find extremely hard to find a decent looking, _well written_ html template, most of them are div crazy, extremely heavy CSS/js payloads or use cufon, kerrschpitt. And assistly looks like a total rip off at $69 p/m per user (to _me_ anyway). I mean swipe might make an interesting submission in itself, but the homepage is light on details, looks like it's in a closed beta, which probably means US only, no good for me. Anyway tl;dr is that the tools your business uses are very personal choices of services many of us already know about, I find them dull. What's more interesting is what's missing, no accounting system, no bug tracking, no server uptime monitor, no analytics, no A/B testing. R: patrickod You're right; Swipe is in closed beta at the moment. An email never hurts though R: seats Great list, but to me the last two items aren't like the others. Everything about starting tech companies has gotten easier and cheaper, but accountants and lawyers haven't really changed all that much. He didn't specify exactly how much they are paying for those two, but it still sounds like it will be a fairly beefy hourly rate or a retainer + equity. I think for a boostrapped company these are still your two really big overhang costs where people end up weighing going without or dyi versus committing to legal or accounting as your biggest up front operating expense. Of the two, I'd say accounting has probably changed the most, there are plenty of workable software solutions for keeping books that aren't too bad and it seems like there are plenty of people trying to build startups around that particular problem. Can't say the same on the legal item though. R: mcdowall Great list! Using a few of those myself If i can be cheeky I'd love an intro to the guys at Stripe, think it was a fair few months ago I registered my email for their Beta and would love to implement it for my startup. R: saikat Hey (Saikat from Stripe) -- not cheeky at all, but certainly flattering. Sorry we've been kind of quiet (we do read Hacker News, though). We're just working hard to implement the feedback we've been getting from our existing users, and we want to make sure our product scales well and gets better as new people use it. Here's a question: any chance you would be interested in having us watch you integrate Stripe? We've been doing this lately to try to make sure our first- run experience is really good. Send me an e-mail () either way. R: s00pcan Stripe was something on the article I hadn't heard of before. It just seems so logical for cardholder information to go directly from the customer to the payment processor using javascript that I wonder why it hasn't been done before and what you're doing differently. Can you explain? R: kolektiv Well, hosted payment is not a new thing at all - so you iframe or link to a page you don't host which the customer uses - thus ensuring that card details don't hit your servers and don't give you a PCI surface. This is a fairly logical extension, I would guess that the reason it hasn't caught on more is because a JS requirement has typically been a red flag in e-commerce - 3% of users not being able to pay you once they got to that point of a funnel could be seen as disaster. Interesting, because we're looking at mandating JS in our new developments (background: company I work for does a lot of high end e-commerce - we're specialists). In theory it's a good idea (that side of it at least) but I don't know how security perception and customer acceptance rates will go. R: s00pcan Oh, of course. I completely forgot that there are some crazy people out there who browse without javascript. I was just jumping at the idea of reducing PCI compliance issues - I've had to deal with them and it's a huge project. R: there now someone needs to make something to use the APIs of all these sites to be able to control users across all of them from a single location. bringing on new employees or terminating existing ones and having to do it across half a dozen different sites sounds kind of tedious and error-prone. R: tripzilch Great point. I noticed the same thing. First you get your Google Apps account, and then the passwords for the other accounts are mailed to there, then two weeks later you find that one of the systems has been replaced in favour of another one. Indeed tedious and error-prone. And that's just from the employee's point of view, the administrator having to create all these different accounts is probably even less happy about it. R: benjohnson eFAX !?!?? eFAX is evil when you try to close your service - you have to go through their horrid 'chat' system and even then I had to cancel my credit card to get them to stop charging. And no... it's not just me: <http://daviddahl.blogspot.com/2006/05/efax- sucks.html> R: rabidonrails Launched <http://phaxio.com> into beta a couple of weeks ago...shoot me an email if you'd like an invite (email in profile) R: pbreit Any way to upload or email a PDF? R: rabidonrails Absolutely! We have an API that allows you to POST files to fax. R: kinkora For a web-based company, I would add Amazon Web Services(AWS) at the top of the list. AWS is relatively expensive but if you are a startup with a limited amount of capital and need to scale quickly, it allows you to utilize a corporate grade web/computing/server/database infrastructure without having to build one yourself. R: athst Interesting list, I'd be interested to see what other "stacks" companies are running on. R: spullara We don't list out all the business services, though we should add them now, but we do have our technology and services stack for production: <http://bagcheck.com/bag/382-bagcheck-technology> R: timsally It's an interesting contrast how cheap the technical tools are compared to the financial and legal skills retained. I'm not sure if Ropes & Gray does something special for early stage companies, but they are a top and expensive firm. R: statictype What advantage do these group chat apps have over something like Skype? R: alanh No spammers, for one. Skype's iOS app is absolutely terrible for chat, too; HipChat's is passable, and of course with IRC you will have a few options. R: vijaymv_in Amazing list. I am wondering how do you handle signatures R: omouse Their committment to free/open source software is astounding! </sarcasm> R: clistctrl I didn't really find the article that interesting, however looking at this <http://xamarin.com/> company I'm extremely intrigued by the product.
HACKER_NEWS
How fast should I be able to work through Spivak I am currently self-studying Spivak’s Calculus. Unfortunately I did not have the chance to take math courses in college so I haven’t been formally taught proof based mathematics-I’m trying to learn now from Spivak+a copy of the answer manual. I typically read a chapter twice, then jump into the problem set. I can only do a few of the early problems easily and all the way accurately, but I can make some progress on some of the later problems. Then I look at the answer for the first problem I can’t solve, copy it down, and try to understand why it works. Once I can prove it from memory, that proof technique is generally enough to let me prove the next several problems. When I get stuck in a section I once again look at one solution and often this lets me make great progress on the others. I repeat until I can do most of the non-starred problems and move on to the next chapter, or if I’m really stuck I take a break for a couple of weeks and when I come back it’s easier. However, since I have no standard of comparison, I’m not sure how to tell if I’m any good at this. Obviously there’s merit in doing math at any pace, but I’d still like to know if I’m struggling way too much (and should move to something easier), if I’m on pace, or if I’m doing really well. At about what pace and with what level of accuracy should a competent math student be moving through the problems of Spivak’s Calculus? How many hours/days/weeks should a chapter take me? Am I wasting my time, or is math just slow? Math is slow. And it takes a variable amount of time to get through. Math is the occupation of the patient. You are not working at an unusual pace. It would be easier if you have someone to work with, or better still, guide you. Different people have different aptitudes. @copper.hat Yup, and that aptitude isn't universal across all of math. Heck, aptitude for learning concepts varies even within texts. For me, what could take me a week to work through on my own could be 5 mins. with a (suitable) friend. I find this true not just in mathematics. @copper.hat Reminds me of some terrible nights learning Q Mech all by myself in my school library :( @DonThousand: I think I am incapable of learning on my own :-). Thanks for the responses, guys. Is there any good place on here to find such a buddy? @Samuel Sadly, no. There are nice chatrooms, though, if you want to talk out some problems. Alas. Well, I'm moving pretty slowly as I work another job ~70-80 hours a week, so I'm not sure it would be easy to find a buddy who works at my pace of nothing at all and then a huge burst on breaks/some weekends. Thanks anyway! Also, is it worth working through all of Spivak, or just certain chapters?
STACK_EXCHANGE
I will admit, the game originally showed promise and was rather Stanley Parable-esque, but it went down hill fast. It was mildly amusing - albeit slighlty painful - and continually expressed that it wasn't meant to actually be a game (so we can't exactly call it a bad game now, can we?) but instead an experiment. I doubt anybody will ever really know what said experiment was focussing on, but I do hope Anothink got the results he was looking for as the experience was bizarre, badly programmed and generally not very good. I mean seriously, what were those rooms with the Half-Life 2 Stalkers etc. about? They made no sense and were just not necessary in any way. The game is horribly designed. It's underlying mechanics are way too simple to make a compelling simulation, and work in a sometimes incomprehensible and illogical manner. The tools to properly work on titles are either not present or inaccessible at the stages they would normally be available at to game developers. Pressure builds in a silly manner due to misdesigned systems interaction (workforce management versus results in a stupidly compressed timeframe). If you **** up, you can basically start over, because correcting your mistakes so that they won't hurt you two hours down the line is nearly impossible. The progression is also painfully static. It basically forces you to replay, but doesn't offer any replay value to incentivize this. Overall, the game feels really horrible. It's unrewarding, the components are slapped together in a stupidly simplistic and obscure box with mediocre windowdressing. You can get hundreds of current games that have a WAY better value at this price. Weak. Fist, using a promo to get votes, lame. Second the wording of the big banner on the promo is made to look like anyone voting gets a free copy (purposefully deceptive). Just more less than honest tactics from a bad developer. I'll loose all respect for Desura and IndieDB if this guy wins anything. It can be quite fun but wow, it feels so unpolished and cheap and nasty. - No music or background sound most of the time. Silence. Feels really weird. - Some things don't have sound effects. E.g. hitting with an axe when there's nothing there. - Enemies are so stupid and move in the derpiest way imaginable (almost as if they are in an online game and you have lag). - Animation is somewhat primitive - The background tears nastily when scrolling - Can't redefine keys. "The narrative develops through the player's interaction with the game's world." I would like to better understand what interactions one can do? Until two thirds into the "experience" I could do nothing but move the mouse around an look. Sorry but I found this a complete waste of time. As I huge fan of Lunar Lander and the many games that drew inspiration from it I gotta say this game is a bit of a disappointment. No thrust/speed indication, no ship rotation, no fine tuned thrust control (it's digital, full off or full on), basically a fairly uninspired puzzle game. With work it has the potential to be fun, but right now I'd rather play lander or any of a number of other free Flash based games that do the concept much better I rarely if ever bother to write reviews but feel compelled to do so for this one in the hopes I save anyone on the fence the $4 entry fee to something that really doesn't seem to me to be worth the cost. Personally, while it seems pointless and frustrating, I think there is a deeper meaning to this. I think it is a statement saying how no matter how bad a game is, we play, and keep playing, it to level up, gain XP and unlock literally meaningless perks, it's our human nature, to accquire things. Not saying the game is bad persay, just interesting.
OPCFW_CODE
Does Stack Overflow support gracefully moving an off topic question somewhere more appropriate? There are quite a few "off-topic" questions on Stack Overflow that are flagged, stopped, voted down, and so on, which nevertheless have very useful information for me. Some have solved problems for me and I would even want to add a comment or reply. Is there a graceful way to migrate these threads somewhere else so that they may continue to live, without cluttering up the main Stack Overflow site with "off-topic" questions? yes, there are places where they can 'continue to live', your PC for example. Or if you prefer, you can put them in another website (giving proper attribution). Remember that "user contributions are licensed under cc-wiki with attribution required" Do you mean new questions or old? I ask because old questions get the historical lock. New questions l ike this are really bad for the site as a whole as described Stack Overflow: Where We Hate Fun We already have a feature known as migration. 3k users can vote to migrate offtopic posts to a certain subset of all the sites. Diamond moderators can migrate to any site. Just flag the post with a custom flag, saying "migrate to X.stackexchange.com". (List of all sites here) BUT: This doesn't mean that there is always a site to migrate to. The network clearly doesn't cover all topics, so many questions may just be off topic--no migration needed. If the question is a programming question and off topic on SO (and Programmers), then it is mort probably just off topic. SO+Programmers don't handle all programming questions, it is restricted by the faq. This applies to any destination migration site. Many a time, mods have to discuss with the mods of the destination site before migrating--so a question may not be migrated due to the site scope. For example, this may be on-topic on Gaming, but I highly doubt it. We do not migrate crap. Some questions may be on-topic elsewhere, but if they're not too good they don't get migrated. For example, this may be on topic for secutiry.se--but it is not-constructive, and thus won't be migrated. There are also questions which would be tolerated if asked on the destination site directly, would probably not be migrated if asked elsewhere. Like this one. Maybe on topic for our Unix&Linux site, but not too good. We also (generally) don't migrate old stuff Also, when in doubt, the Moderators ping the other site's moderators and ask them if they even want the question. Users should not cross post a post to multiple sites.
STACK_EXCHANGE
Insufficient prvileges for Revoke-AzureADUserAllRefreshToken I am trying to revoke the refresh tokens of a specific user (my own) in AzureAD to force a completely new logon to an applicaiton. As there is no UI option for this in the Azure Portal (there actually is -> see in one of the anwers) I am using the 'Windows Terminal's 'Azure Cloud Shell' option as follows directly from the built-in Azure Cloud shell: Connect-AzureAD PS /home/...> Revoke-AzureADUserAllRefreshToken -ObjectId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" The GUID I pass in the parameteter is the object ID of my user. Unfortunately this fails due to a permission issue: Revoke-AzureADUserAllRefreshToken: Error occurred while executing RevokeUserAllRefreshTokens Code: Authorization_RequestDenied Message: Access to invalidate refresh tokens operation is denied. RequestId: fd5f5256-3909-46af-b709-8068e0744f25 DateTimeStamp: Mon, 09 Aug 2021 16:56:28 GMT HttpStatusCode: Forbidden HttpStatusDescription: Forbidden HttpResponseStatus: Completed If I try to execute the same in the Cloud Shell within the azure portal, the result is the same. If I use a 'classic' PowerShell, then it works. So apparently something is missing with the authentication of the Cloud Shell. When I log in I get to select the right tenant, and my read access e.g. to the user list works perfectly. I have no more clues what I would be missing: I am Owner of the subscription in the azure role assignments I do have the Global Administrator role assigned in AzureAD Is there some special command to 'elevate' the permissions? I tried to reproduce the issue on my Azure AD tenant , but unfortunately I didn’t receive the error you are getting . Note: Make sure you connect with AD with your Global Admin account i.e<EMAIL_ADDRESS>or<EMAIL_ADDRESS>so that you see the correct details in every column in the above red box. Other options : From Portal you can go to the user profile and click on revoke sessions . Using Graph Explorer you can revoke signin Sessions. Post https://graph.microsoft.com/v1.0//users/UserObjectID/revokeSignInSessions Reference: user: revokeSignInSessions - Microsoft Graph v1.0 | Microsoft Docs Thank you AnsumanBal-MT for the detailed answer. The "Revoke Sessions" button is a very good hint, I did not notice it so far, as I was so much focused on getting the CLI working. Logging in with the same admin user pressing this button was successful - so my user seems to have the permissions. Coming back to the terminal based access, when executing Connect-AzureAD in my case it does not give any output. Also if I try LogLevel info it does not write any log file, and using the Confirm option also did not give any prompt. I was trying this command in the azure portal's buil-in cloud shall, as well as with Windows Terminal's 'Azure Cloud Shell' option. I now installed the AzureAD cmdlet package fore classic PowerShell and use it there -> there I do get the expected output and also log files. I can confirm that that the logon users is correct, and the revoke works from there. Glad to hear it that it worked ! yes you are correct i have provided it for AzureAD cmdlet for classic powershell ..
STACK_EXCHANGE
On 16 March 2020, an abrupt change entered my daily life, as it did for many others in the United Kingdom. All of a sudden, instead of working in a laboratory fabricating microfluidic devices, operating single-molecule detection setups, and preparing for my weekly undergraduate teaching, I found myself scrambling to bag up everything I might need from my office so that I could be productive from home for an indefinite period of time. It seemed a bit surreal, only having been back in the country for 2 weeks after attending the Biophysical Society meeting in San Diego, California, and taking a short holiday in the United States to visit family, to suddenly be leaving again. However, as we all did, I packed up my things, and away I went. Three days later, I found myself shackled in the countryside of the English county of Hampshire, attempting to find a new way to progress my PhD without any ability to gather wet-lab data. No longer could I spend every day in the lab, doing experiments to study the molecular determinants of protein phase separation. I would have to become more creative about how to spend my time. Still, due to the rich nature of my scientific field, I found a way. Instead of pipetting solutions, I now spend my mornings pouring a second cup of coffee and reading a backlog of articles that I said I would come back to. These range from those directly implicated in my research in phase separation to more distant articles, such as how high-energy impacts from meteorites could have been the catalyst to forming early biologic covalent bonds. Instead of operating microfluidic devices, I find myself analyzing a copious amount of data, writing, editing, making figures, and rewriting, trying to piece together coherent stories to share with the scientific community. Instead of instructing undergraduates, I find myself engrossed in online trainings of how to fit reaction data and process biological images. Isolation has afforded me much time to reflect, particularly on why I decided to go into biophysics for my PhD, and specifically how lucky I am to be in this field at this chaotic time. It took something quite drastic for me to rise above the hustle and bustle of daily life in academia and sit back and think about the greater purpose of studying biophysics. For me, biophysics represents not a single field but a scientific way of thinking that uses the interwoven nature of all areas of science. It recognizes the intricate complexities of the natural world that extend beyond the pigeonholes of strictly defined disciplines. It is a field in which someone with a background in chemistry, such as myself, can easily branch out to learn about the in-depth fluid dynamics of proteins in the cytoplasm, and at the same time, learn lessons from the physics of polymer blends to better understand cohesive forces between biomolecules. This lack of boundaries comes to light for me each day, when I think about problems related to the expression of a chimeric protein, aimed at studying the spread of aggregates between cells at one moment, while the next moment, I am designing microfluidic devices for assaying the most fundamental thermodynamic properties of biochemical systems. Later on, I can be found developing models for how the translational friction coefficients of protein assemblies change during growth for different assembly geometries. Needless to say, one doesn't have to be locked away in the countryside to realize the interdisciplinary nature of their field, but it certainly does afford one ample time for such contemplation. After this time in isolation, I see profound changes happening in the way I am conducting my PhD. It is too easy to be bogged down in a single niche subfield when scrambling to complete a thesis, and I'm not sure it affords one the best preparation for life after graduation. Instead, I'll continue my isolation practices of reading diverse topics, planning experiments, and spending time to understand theories that are not only scientifically interesting but that also teach me new information and techniques to add to my scientific repertoire. At the end of the day, a PhD is about learning as much as you can, pushing the boundaries for continued knowledge gathering and improvement. Those of us in biophysics should consider ourselves lucky to not have strict boundaries, to be able to pursue vastly different realms of science under a central umbrella, and to never forget to keep branching out.
OPCFW_CODE
I agree with Karen: due to the sensitive info and the Summon API terms of service it's important to avoid exposing your credentials - same reason nobody probably has or will offer a public proxy to the Summon API as mentioned in the embedded email from Oct. 26. On the other hand, you can create a local proxy/JSONP web service in your language of choice and call it from JS - taking care to try and limit access to your service to your own JS files, etc. I can share our (nclive.org) PHP Summon API caller function (if PHP is a language you use), but it'll be better in a week or so. Still missing code comments, special char. escaping, etc. It just returns the native Summon format (change to XML or JSON) so one would need to add the GET parts and having it return JSONP with a JSON header, etc. to turn it into a local JSONP web service that talks to Summon behind the scenes. In the meantime, maybe someone else on this list has a more ready-to-share On Mon, Nov 3, 2014 at 9:05 AM, Karen Coombs <[log in to unmask]> > I don't know what the Summon API uses to authenticate clients. It looks > from the Python code like a key and secret is involved. You should be care > makes them available for anyone copy and use. > On Sun, Nov 2, 2014 at 4:12 PM, Sara Amato <[log in to unmask]> wrote: > > they can be constructed to use jsonp and avoid cross domain problems > > Subject: > > Re: Q: Summon API Service? > > From: > > Doug Chestnut <[log in to unmask]> > > Reply-To: > > Code for Libraries <[log in to unmask]> > > Date: > > Wed, 27 Oct 2010 11:56:04 -0400 > > Content-Type: > > text/plain > > Parts/Attachments: > > text/plain (45 lines) > > Reply > > If it helps, here are a few lines in python that I use to make summon > > queries: > > def summonMkHeaders(querystring): > > summonAccessID = 'yourIDhere' > > summonSecretKey = 'yourSecretHere' > > summonAccept = "application/json" > > summonThedate = datetime.utcnow().strftime("%a, %d %b %Y > > %H:%M:%S GMT") > > summonQS = "&".join(sorted(querystring.split('&'))) > > summonQS = urllib.unquote_plus(summonQS) > > summonIdString = summonAccept + "\n" + summonThedate + > > "\n" + summonHost + "\n" + summonPath + "\n" + summonQS + "\n" > > summonDigest = > > base64.encodestring(hmac.new(summonSecretKey, unicode(summonIdString), > > hashlib.sha1).digest()) > > summonAuthstring = "Summon "+summonAccessID+';'+summonDigest > > summonAuthstring = summonAuthstring.replace('\n','') > > return > > --Doug > > On Tue, Oct 26, 2010 at 6:46 PM, Godmar Back <[log in to unmask]> wrote: > > > Hi, > > > > > > Unlike Link/360, Serials Solution's Summon API is extremely cumbersome > > > use - requiring, for instance, that requests be digitally signed. (*) > > > > > > Has anybody developed a proxy server for Summon that makes its API > > > (e.g. receives requests, signs them, forwards them to Summon, and > > the > > > result back to a HTTP client?) > > > > > > Serials Solutions publishes some PHP5 and Ruby sample code in two API > > > libraries (**), but these don't appear to be fully fledged nor > > > easy-to-install solutions. (Easy to install here is defined as an > > average > > > systems librarian can download them, provide the API key, and have a > > running > > > solution in less time than it takes to install Wordpress.) > > > > > > Thanks! > > > > > > - Godmar > > > > > > (*) http://api.summon.serialssolutions.com/help/api/authentication > > > (**) http://api.summon.serialssolutions.com/help/api/code > > > nitaro74 (at) gmail (dot) com "Hope always, expect never."
OPCFW_CODE
The plastic surgery face database is a real world database that contains 1800 pre and post surgery images pertaining to 900 subjects. For each individual, there are two frontal face images with proper illumination and neutral expression: the first is taken before surgery and the second is taken after surgery. The database contains 519 image pairs corresponding to local surgeries and 381 cases of global surgery (e.g., skin peeling and face lift). The details of the database and performance evaluation of several well known face recognition algorithms is available in the paper mentioned below. The list of URLs is compiled in a text file along with a tool to download the images present at these URLs. The tool will download the images and store them at the specified location. - Text file containing the URLs (7KB) (CRC32: D132C4A2, MD5: 0FBE3041D95FEE000CAF263048B52480, SHA-1: 325AAED6F31E4AE1471DE44F91A9BC2B63B0AAFD) - Tool to download the images (11KB) (CRC32: 2548A7C1, MD5: 79DF4CF8D12B724DCB6B54827C9C9738, SHA-1: 44731E292E05FFE683C2BB5BC1A67216ACADF2DC) - To obtain the password for the compressed file, email the duly filled license agreement to [email protected] with the subject line "License agreement for Plastic Surgery Face Database". NOTE: The license agreement has to be signed by someone having the legal authority to sign on behalf of the institute, such as the head of the institution or registrar. If a license agreement is signed by someone else, it will not be processed further. This database is available only for research and educational purpose and not for any commercial use. If you use the database in any publications or reports, you must refer to the following paper: - R. Singh, M. Vatsa, H.S. Bhatt, S. Bharadwaj, A. Noore and S.S. Nooreyezdan, Plastic Surgery: A New Dimension to Face Recognition, In IEEE Transaction on Information Forensics and Security, Vol. 5, No. 3, pp. 441-448, 2010. Disclaimer: The images in the plastic surgery database are downloaded from the internet and some of the subjects appear on different websites under different surgery labels. Therefore, this database may have some repetition of subjects across different types of surgeries. We have figured out multiple cases with such inconsistencies and provided an errata. If you come across some other cases as well, kindly report it to us. Please find the text file of errata. The images seperated by comma(,) represent the redundant entries in the database.
OPCFW_CODE
The Speccies - ZX Spectrum The creation process - Part 1 Last week, the game I was working on "The Speccies" was released. It was released as a free digital download for the ZX Spectrum and was also available as a limited cassette copy. You get more details about the game, including the download, from the Tardis Remakes website. Go download it now! I'll go into how I went about creating the physical copies of this game in a separate article. When I was first asked to do the graphics for this game away back in February, I knew nothing of the game it was based on - The Brainies/Tiny Skweeks, which was released on just about every format other than the ZX Spectrum - and at this point "our" version was still going to be called "The Brainies". I was keen to work on another Spectrum game that would actually be released, having been involved in a few others that just faded away. I looked at the graphics from the DOS version that were sent to me and Søren, the coder of the coder of the game, may now be surprised to know that my heart sank when I saw those graphics. I still had no idea how the game played and I thought the DOS graphics were terrible. Having a look online, the SNES and Amiga versions weren't much better. Now, top-down graphics can be quite difficult to do, but when you are limited to 2 colours and 16x16 pixels, things start to get a little tricky. Doing a top-down graphic of a Brainie walking wouldn't be very exciting both visually to the player and to me, the person creating the graphic. I decided that I'd have the Brainie roll. Yeah, roll. That'd be fun to do! The number of frames to animate a sprite on a ZX Spectrum can be limited due to the small amount of memory available. 4 frames would be no good, but if I could use 8 frames, then I thought it'd look good. Thankfully, I was told, "Sure, 8 frames is no problem. It could make things easier if everything had 8 frames." I'm paraphrasing, of course. Off I went, to create a sprite that I was determined to have something of my design....and this is what I came up with. I know, pretty rubbish, right? Not only was the character itself pretty uninspiring, but the frames for the rolling just weren't right either. I took a step back from the computer and went about sketching each frame on paper using pencil and then I'd film it using the Vine app on my iPhone. I would kill some time on a Friday night, at least. Not only was this an exercise in getting each of the 8 frames needed for rolling, but also to try out an slightly new design fora Brainie. When I create a character, it's all about proportion and I felt I now had that right. I still felt that there wasn't enough on the Brainie itself, so when it came to creating the character in pixels, I highlighted it's face. This was also a way of showing that it was rolling rather than it's eyes and feet looking like they are spinning. And after only 3 iterations (the 2nd displayed up there and the 1st is almost identical to the 2nd other than having no "brow") I got the character design right and it never changed. I had the sprites for moving down only, though, therefore looked into frames for it turning around - couldn't manage it due to the 16x16 pixel limit - always facing down but rolling backwards when going up and always facing but rolling sideways left and right - again the 16x16 pixel limit proved this impossible. I even considered making the character smaller by 1 pixel an all sides, but then it actually lost character, so quickly abandoned that idea. In the end, we compromised. When he was static, he always faced you and selecting which way to move, he'd face that direction and roll. I'd add more facial expressions and movements to it later on in development, but I was feeling good. I was also starting to get into the game, especially since it was similar to a game I'd just been played and loved on my iPhone, "Squarescape". I had got, what I thought, was an aces animated character that really looked like it was rolling properly and I assumed the difficult part was over. There wasn't much else to be animated and surely the other sprites/tiles would be relatively easy. How wrong was I! It turns out this graphic was one of the easiest.
OPCFW_CODE
So apparently it's illegal to talk about technology difficulty here? I tried creating a question asking what work is involved in creating a facial recognition app and got recommendations on my life and how to spend my free time... I want to talk about what kind of libraries are available to do a certain thing within Android and what kind of difficulty a task is and I get mundane irrelevant answers ending in my thread being closed? I guess here's another Exchange group that should be forgotten. Out of 5 answers, not a single one was relevant to my question. It entirely is possible your question is offtopic here. Can you post it here so we can try to help you? Possible duplicate of Green fields, blue skies, and the white board - what is too broad? @MetaFight for your convenience here is a screen shot of deleted question @Snowman Nope.. Q & A, of the type that we practice here at Stack Exchange, rests on a few fundamental principles and assumptions: Each site has a specific subject matter area. Questions asked on Stack Exchange must fit the subject matter of the site you post them on. The questions you ask must have a well-defined scope; that is, they must be specific enough to be answerable. What that means in practice is that you can't ask questions that solicit opinions, ask for a list of things, make product recommendations, or are too broad. There are very good reasons why we follow these principles. If you've ever tried to get an answer to one of your questions in a forum environment, you already know why we have these conditions: it's very nearly impossible to get a decent answer on a forum. In short, forums suck. So we do everything we can to avoid those forum behaviors that prevent people from getting good answers to their questions. This reduces the noise and is more attractive to those subject matter experts who are here to provide answers to your specific questions. If you are not aware of these principles, or fail to follow them for whatever reason, you're going to have a very hard time participating anywhere on Stack Exchange. I followed said principles. If you had, your question would not have been closed. Yes, that's why this is such a weird thing for the mods to do. I guess I can't expect them to do a good job nowadays. @insidesin: You sound like a criminal who was caught red-handed with a stolen wallet, trying to tell the police that the guy just gave him the wallet. Your questions were not OK for this site, period. We do not allow questions that ask "what libraries are available" for something. We do not allow allow open-ended questions like "how to create facial recognition software". You can try to pretend that your questions are on-topic for the site, but our rules are quite clear that they are not! @insidesin: Again, this isn't your own personal soapbox. If you want to do that, take it somewhere else. @insidesin: "I didn't ask what libraries were available. I asked if there were libraries to make it easier." Same difference: you were asking for libraries. That's not an acceptable question for this site. @NicolBolas I wasn't asking for libraries. I was asking if there were libraries. It's a yes or no answer. @RobertHarvey If I want to do what? Ask relevant questions to do with software technologies? @insidesin consider giving a read to Question closed because yes/no answer @gnat That wasn't the question of my post though. That was just some idea or direction in how I want it answered. @insidesin: "that's why this is such a weird thing for the mods to do." – No moderators were involved in either the closure or the deletion of your question. Please, get your facts right. @JörgWMittag sorry, accidental forum expression. It's unfortunate that you aren't familiar with forums either. What you were asking for is a topic where a useful answer could fill a whole book. Such questions are usually considered as too broad for the Q&A format of this site. Note that facial recognition is still a topic of scientific research. You argued it is not too broad, since you only want to know how much work is involved. However, that is nothing strangers on the internet can tell you, because we do not know you, your background, what exactly the features are you imagine for your program, or the quality of the facial regcognition you expect. Each of these details can easily make a difference factor of 20 or more in the resulting effort. So even if your question would not have been closed as "too broad", it should have been closed as "primarily opionated". Moreover, it is not "illegal" to talk about technology here. Almost any question on this site has a specific technological context. However, we do not give any technology or library or tool recommendations, and questions asking for such a recommendation are typically closed quickly by the community, too. See also: Why was my question closed or down voted? here is the link (the question is deleted but you can access it with your rep level): What work is involved in creating a mobile filter/face-swap style program? @gnat: thanks, I should really learn how to use the query features of this site to find such a question by myself :-) I didn't query - just checked 10K tools page and found it in recent deleted questions How can that fill a whole book? It's asking about the technology of one very specific method. I guess I should just face the fact that people don't like advanced topics here. @insidesin: http://www.springer.com/in/book/9780857299314 http://www.intechopen.com/books/face_recognition @DocBrown https://www.amazon.com/How-Spot-Liar-People-Truth/dp/1564148408 @DocBrown https://www.amazon.com/Liespotting-Proven-Techniques-Detect-Deception/dp/0312611730/ Boy, being useless is fun! @insidesin: Note that meta is here to give you the ability to provide feedback or get answers to your questions about the SE platform. It's not your personal soapbox; rants are not really welcome here. @insidesin: "Boy, being useless is fun!" He's not being useless. You asked how facial recognition could require a whole book. He demonstrated how it can by showing you several books about facial recognition. You lost the argument. @insidesin: kid, if you don't like what is on- or off-topic on this site, better ask your questions somewhere else. @RobertHarvey Good to know, when I start ranting you can shut me up. Til then it'd be nice if you stopped spitting irrelevance. @NicolBolas There was no argument. Everything can fill up a whole book, now if all you're going to do is be facetious, please stop and refrain from wasting my time. @DocBrown "kid" ? I'd love for this site to be worth something, but a simple test has shown that it unfortunately is not.
STACK_EXCHANGE
Why register your printer? These games are not shown due to an attack on the machine at the time reported by Security in 2012. Both phones are expected to be more popular than table and slot games on various sites. To download the Telltale game, bring your six-part gamble and run the desktop application. We find the audio quality to be the performance to handle most applications well. Maybe it’s just the slots that are loose or tight that make the audio quality ridiculous. One of the cases when video poker slots are bonus rounds. Bootcamp comes with hardware video acceleration and our destination is London this morning. 14.13 lots of internal memory is not a good enough reason to delay the hardware further. No one wants to scribble a little more while still being pretty soft. As for performance, the R705 takes a pure video playback engine and adds a lot of features. This information allows him 16 or 20 hours of standard definition video on other machines. 17:36 We can see the extended information, but everyone wants the battery life to be short as well. We plead and plead, but the creators of this system seem to have found information that proves otherwise. They are currently compatible with our Mac Pro test system, but that’s not all. But unlike the Dell Venue 8 Pro, for example, an accelerometer-protected hard drive for security-conscious users. Apple Retina Display Macbook Pro has a different four-way controller depending on who you ask. This happened on the previous generation 27″ iMac, and the same thing happened to the iMac with Retina display. As Mrijaj says, tight lock screens, for example, have this rich color rendering. I actually declined both offers, but the screen refresh rate increased. An additional advantage is that Google works this way or at least 7 minutes or his two lenses placed diagonally to each other, three sticks and a small notch iPhone 12.7 either with a jackpot . The condition can be cut considerably with one from 2 months ago. The downside is that many of them can be played in demo mode. However, a Bluetooth connection can match sound effects and flashing lights or noises. Unsurprisingly, the narration is by Academy’s Alex Pritchard, with the rest on loan to Hoffenheim.Next, the paytable section lists Atlantic City as the classic controller. So try to end the overhead to keep the old slots in the new model. For example, the higher the slot machine success rate, the higher the payline. Halo69 Slot kembali kebanyakan orang akan. Dell XPS akan cocok dengan. Koalisi pemalsuan di mengatakan Las Vegas sangat diatur untuk menjamin tingkat tertentu. 11: 07 di katakan beberapa kompak atau bahkan sementara kita tidak tahu. Pelanggan kasino California, ini adalah beberapa saluran listrik pertama yang cocok dengan ulasan kami tentang ini. Of only two improvements, it represents a massive upgrade to the phone. Players first start broadcasting to each other via Bluetooth, freeing up USB ports.
OPCFW_CODE
The master_2020 version can only be loaded over a database that was previously updated to master_2019.2. So please make sure you are running on the most current version of master_2019.2 before you upgrade to master_2020. Note that an error will be reported if an attempt is made to update from a pervious version. Needless to say, master_2020 is a major update for SimpleInvoices. For one thing, it requires that you are running on a version of PHP 7.4 or greater. Although it might work on previous versions, there is no guarantee that will always be the case. The benefit of of PHP 7.4 is faster processing and greater security. From a development perspective it provides better features to use in development of applications such as SimpleInvoices. The most major change made in the master_2020 is the removal of Zend Framework 1 libraries. Areas affected by this change include: - Session handling - Access Control List (ACL) logic that determines what you can access based on the user type you are logged in as - Formatting libraries used for numbers, currencies and dates - Application logging - Access to configuration file settings Additionally, this update uses Composer and Node for vendor and jQuery library maintenance. This change helps automate the process of keeping support libraries up to date. master_2020 has an updated report generation system. In previous versions, only the Statement of Invoices report presented buttons to Print, Export to PDF, XLS or DOC files and an Email option. The new report framework supports these options for all reports. Also, the presentation of the reports was standardized so the look of all reports is the same. There are two configuration files in master_2019.2: config.php & custom.config.php. In master_2020 these are changed to the .ini files: config.ini & custom.config.ini. The “config” file contains all key/value settings that are needed to configure SimpleInvoices and is maintained as part of the SimpleInvoices code. The “custom.config” file is the runtime version that contains the same “config” file keys but with values set for your implementation (DB name, user & password, etc). A process is included in the master_2020 update to convert your “custom.config.php” file to the new “custom.config.ini” file format. Update to master_2020 from master_2019.2 - Save a copy of your “custom.config.php” file located in the config directory for use later in this process. - Export your full database using phpMyAdmin or whatever administration tool you use. - Backup your full SimpleInvoices implementation. Make a .zip or .gzip copy of your full SimpleInvoices directory path. Include the database extract from step 2 in the backup file. - Save the backup file in a directory separate from your SimpleInvoices directories. - If you have developed your own extension or custom hooks, they will be included in the backup file. - Download the “master_2020” version of SimpleInvoices. - Delete your SimpleInvoices root directory. This includes all sub-directories within it. - Extract the content of the downloaded “simpleinvoices-master_2020.zip” file into the “document root” directory of your webserver. - Rename the new directory, “simpleinvoices-master_2020” to the name of the SimpleInvoices directory you deleted in step 7. - Copy the “custom.config.php” file saved in step 1 into your SimpleInvoices “config” directory. - Using a text editor (notepad, Notepad++, etc.), open the file, “si2020Converter.php” file located in your SimpleInvoices root directory, and change the setting of the “$secure” variable on line 2, from “true” to “false“. Save the file. - In your browser run this program. For example, if your root directory is “simple_invoices” then in the browser address line enter, “simple_invoices/si2020Converter.php“. If this runs successfully, a green result line will be displayed. This program makes the new “custom.config.ini” file from the content of the old “custom.config.php” file. - Now in your text editor used in step 11, change the “$secure” setting back to “true” and save the file. You can also delete the old “custom.config.php” file from the “config” directory. Proceed to the, First Use Of Update, topic instructions below. - Select the Backup Database option on the Settings tab and follow the instructions to backup your database. This will store the backup in the tmp/database_backup directory. You can leave the backup there. - Rename your SimpleInvoices directory to something like, simple_invoices_yyyymmdd_b4_update. The rename moves all content of your current SimpleInvoices directories is preserved for easy recovery if needed. Update Installation Instructions Follow these steps to complete your update: - Make sure you do what it says in the Backup First topic. - Recreate the directory that your current SimpleInvoices was installed in that was renamed from in the Backup First step. We will call this the SimpleInvoices directory. - In your browser, download the “master_2019.2” version for PHP 7.2 and greater, or the “master“ version for PHP 5.6, 7.0 or 7.1 from the Clone or Download button on that page. - Unzip the download file. It will create a directory named the same as the zip file (assuming you didn’t rename it); typically, simpleinvoices-master. - Copy the content of the directory created by the unzip process into the directory created in step 2. - Copy the config/custom.config.php file from your previous SimpleInvoices directory and save it in the config/ directory of the new SimpleInvoice installation directory. Changes to the new version of the config/config.php file will be automatically added to the new copy of the config/custom.config.php file. - If you have your own business invoice template, copy your company logos from the backup template/invoices/logos directory to the updated install template/invoices/logos directory. Next copy the directory your business template is in, from the template/invoices directory to the template/invoices directory. Proceed to the next topic, First Use Of Update. First Use Of Update - Access the updated SimpleInvoices site. If authentication is enabled, log in as your normal administrative user. - If there are NO database updates (aka patches) to perform, just start using SimpleInvoices. - If there ARE database updates, you have two quick actions to perform. - You will be on the patch page at this point. This page lists all SimpleInvoices patches; both applied and unapplied. Scroll down the list to see what unapplied patches are pending. They are at the bottom of the list. Scroll back to the top of the list and select the button to apply the patches. - You will now be on the page that lists all the patches, showing that they have all been applied. - Click the button on the applied patches review page and begin using your updated SimpleInvoices. NOTE: If the patch process reports an error for foreign key update, refer to the Foreign Key Update error section below. If You Have Special Code - Custom Hooks – These are changes made to the hooks.tpl file in the custom directory. You need to transfer these changes to the same file in the new installation. Verify the are current and work for the newly installed version. - Extensions – Extensions are the proper way to add new functionality to SimpleInvoices. You will need to copy the directory containing your extension to the extensions directory of the new install. You will then need to review your extension code to make sure it is current for any changes to the standard files that need to be incorporated into your extension file. Test your extension to make sure it functions correctly. - Changes to the standard code – Hopefully you kept copious notes and comments on these changes because you have to track them down and implement them in the new version. HOWEVER, when you incorporate it into the updated version do it as an Extension or via the Custom Hooks. Then your life will be more simple the next time you update. Test your changes and you are ready to us the updated version of SimpleInvoices. Unable to set Foreign Keys Error Handling One of the major changes with master-2019.2 is the implementation of foreign key support in the database. This replaces the partial support in the code in prior versions. If you want to know more about foreign key support, please refer to this topic in the How To … menu option. Foreign key support is implemented in patch #318. If you get the error, “Unable to set Foreign Keys,” the update process will stop after applying all patches up to #318 and will report pertinent error information in the tmp/log/php.log file. Look in this file to see what error(s) have been found. The first part of the error information is an explanation of what has been found. Here is the explanatory text: Unable to apply patch 318. Found foreign key table columns with values not in the reference table column. The following list shows what values in foreign key columns are missing from reference columns. There two ways to fix this situation. Either change the row columns to reference an existing record in the REFERENCE TABLE, or delete the rows that contain the invalid columns. To do this, the following example of the SQL statements to execute for the test case where the ‘cron_log’ table contains invalid values ‘2’ and ‘3’ in the ‘cron_id’ column. The SQL statements to consider using are: UPDATE si_cron_log SET cron_id = 6 WHERE cron_id IN (2,3); —- or —- DELETE FROM si_cron_log WHERE cron_id IN (2,3); The pertinent information to your system then follows in a table that displays all the information you need to correct the error. The following example shows a case where there are orphaned si_invoice_items table record(s) relative to the invoice_id column with a value of “1” that ties back to the id column of the si_invoices table. Here is the example of this: invoice_items invoice_id invoices id 1 Using this information, you can decide to perform an UPDATE or a DELETE to resolve the orphaned records after reviewing your database records. In this case, the likely decision is to delete the orphaned records from the si_invoice_items table. Using the DELETE example above, the SQL command you construct would be: DELETE FROM si_invoice_items WHERE invoice_id = 1 After resolving the foreign key errors, access your SI application again to complete the update process. Note that the table shown for the FOREIGN KEY TABLE column is “invoice_items” but the delete command references the “si_invoice_items” table. This is because the “si_” prefix is automatically added by the database SQL build logic and the application only knows the “invoice_items” part of the table name.
OPCFW_CODE
The Easiest Productivity Hack of All Time By Alan Henry / LifeHacker Getting stuff done is hard, especially if you are self-employed or need to do things for yourself that you usually put off, like paying bills. There always seems to be something else to do: a drawer that could be organized, a phone call to your sister or checking flight prices on a trip you have no intention of taking. Enter the Pomodoro Technique. This popular time-management method can help you power through distractions, hyper-focus and get things done in short bursts, while taking frequent breaks to come up for air and relax. Best of all, it’s easy. If you have a busy job where you’re expected to produce, it’s a great way to get through your tasks. Let’s break it down and see how you can apply it to your work. We’ve definitely discussed the Pomodoro Technique before. We gave a brief description of it a few years back, and highlighted its distraction-fighting, brain training benefits around the same time. You even voted it your favorite productivity method. However, we’ve never done a deep dive into how it works and how to get started with it. So let’s do that now. What is the Pomodoro Technique? The Pomodoro Technique was invented in the early 1990s by developer, entrepreneur, and author Francesco Cirillo. Cirillo named the system “Pomodoro” after the tomato-shaped timer he used to track his work as a university student. The methodology is simple: When faced with any large task or series of tasks, break the work down into short, timed intervals (called “Pomodoros”) that are spaced out by short breaks. This trains your brain to focus for short periods and helps you stay on top of deadlines or constantly-refilling inboxes. With time it can even help improve your attention span and concentration. Pomodoro is a cyclical system. You work in short sprints, which makes sure you’re consistently productive. You also get to take regular breaks that bolster your motivation and keep you creative. How the Pomodoro Technique works The Pomodoro Technique is probably one of the simplest productivity methods to implement. All you’ll need is a timer. Beyond that, there are no special apps, books, or tools required. Cirillo’s book, The Pomodoro Technique, is a helpful read, but Cirillo himself doesn’t hide the core of the method behind a purchase. Here’s how to get started with Pomodoro, in five steps: That “longer break” is usually on the order of 15-30 minutes, whatever it takes to make you feel recharged and ready to start another 25-minute work session. Repeat that process a few times over the course of a workday, and you actually get a lot accomplished -- and took plenty of breaks to grab a cup of coffee or refill your water bottle in the process. It’s important to note that a pomodoro is an indivisible unit of work -- that means if you’re distracted part-way by a coworker, meeting, or emergency, you either have to end the pomodoro there (saving your work and starting a new one later), or you have to postpone the distraction until the pomodoro is complete. If you can do the latter, Cirillo suggests the “inform, negotiate and call back” strategy: Of course, not every distraction is that simple, and some things demand immediate attention -- but not every distraction does. Sometimes it’s perfectly fine to tell your coworker “I’m in the middle of something right now, but can I get back to you in... ten minutes?” Doing so doesn’t just keep you in the groove, it also gives you control over your workday. How to get started with the Pomodoro Technique Since a timer is the only essential Pomodoro tool, you can get started with any phone with a timer app, a countdown clock, or even a plain old egg timer. Cirillo himself prefers a manual timer, and says winding one up “confirms your determination to work.” Even so, there are a number of Pomodoro apps that offer more features than a simple timer offers. Who the Pomodoro Technique works best for However, it’s also useful for people who don’t have such rigid goals or packages of work. Anyone else with an “inbox” or queue they have to work through can benefit as well. If you’re a system’s engineer with tickets to work, you can set a timer and start working through them until your timer goes off. Then it’s time for a break, after which you come back and pick up where you left off, or start a new batch of tickets. If you build things or work with your hands, the frequent breaks give you the opportunity to step back and review what you’re doing, think about your next steps, and make sure you don’t get exhausted. The system is remarkably adaptable to different kinds of work. Finally, it’s important to remember that Pomodoro is a productivity system -- not a set of shackles. If you’re making headway and the timer goes off, it’s OK to pause the timer, finish what you’re doing and then take a break. The goal is to help you get into the zone and focus -- but it’s also to remind you to come up for air. Regular breaks are important for your productivity. Also, keep in mind that Pomodoro is just one method, and it may or may not work for you. It’s flexible, but don’t try to shoehorn your work into it if it doesn’t fit. Productivity isn’t everything --it’s a means to an end, and a way to spend less time on what you have to do so you can put time to the things you want to do. If this method helps, go for it. If not, don’t force it. (at) mindpowernews.com / Privacy
OPCFW_CODE
How to build a hierarchical view of inherited classes in Python? This is a question I tried to avoid several times, but I finally couldn't escape the subject on a recent project. I tried various solutions and decided to use one of them and would like to share it with you. Many solutions on internet simply don't work and I think it could help people not very fluent with classes and metaclasses. I have hierarchy of classes, each with some class variables which I need to read when I instantiate objects. However, either these variables will be overwritten, or their name would be mangled if it has the form __variable. I can perfectly deal with the mangled variables, but I don't know, with an absolute certainty, which attribute I should look in the namespace of my object. Here are my definitions, including the class variables. class BasicObject(object): __attrs = 'size, quality' ... class BasicDBObject(BasicObject): __attrs = 'db, cursor' ... class DbObject(BasicDBObject): __attrs = 'base' ... class Splits(DbObject): __attrs = 'table' ... I'd like to collect all values stored in __attrs of each class when Instantiate the Splits class. The method __init__() is only defined in the class BasicObject and nowhere else. Though, I need to scan self.__dict__ for mangled __attrs attributes. Since other attributes have the pattern attrs in these objects, I can't filter out the dictionary for everything with the pattern __attrs in it ! Therefore, I need to collect the class hierarchy for my object, and search for the mangled attributes for all these classes. Hence, I will use a metaclass to catch each class which calls __new__() method which is being executed when a class definition is encountered when loading a module. By defining my own __new__() method in the base class, I'll be able to catch classes when each class is instantiated (instantiation of the class, not an object instantiation). Here is the code : import collections class BasicObject(object) : class __metaclass__(type) : __parents__ = collections.defaultdict(list) def __new__(cls, name, bases, dct) : klass = type.__new__(cls, name, bases, dct) mro = klass.mro() for base in mro[1:-1] : cls.__parents__[name] = mro[1] return klass def __init__(self, *args, **kargs) : """ Super class initializer. """ this_name = self.__class__.__name__ parents = self.__metaclass__.__parents__ hierarchy = [self.__class__] while this_name in parents : try : father = parents[this_name] this_name = father.__name__ hierarchy.append(father) except : break print(hierarchy) ... I could have access attributes using the class definition, but all these classes are defined in three different modules and the main one (init.py) doesn't know anything about the other modules. This code works well in Python 2.7 and should also work in Python 3.. However, Python 3. have some new features which may help write a simpler code for this kind of introspection, but I haven't had the time to investigate it in Python 3.0. I hope this short explanation and example will save some of your (precious) time :-) I think your answer should go in the.. answers. Yes, you're absolutely right ! But I don't know how to directly post an "answer" :-) Yes, the question is the answer; simply because I couldn't find anything other than the "Ask Question" button on the site. Did I miss something ?
STACK_EXCHANGE
Microsoft Visual Studio 2008 Sp2 Microsoft Visual Studio These are to be started with a different executable. We would love to hear from you! Experience new ways to collaborate with your team, improve and maintain your code, and work with your favorite repositories, among many other improvements. Note that your submission may not appear immediately on our site. If Visual Studio Professional or higher was already installed on the machine, LightSwitch would integrate into that. Consequently, one can install the Express editions side-by-side with other editions, unlike the other editions which update the same installation. Visual Studio Subscriptions. The integrated debugger works both as a source-level debugger and a machine-level debugger. Use the coupon code to avail the discount. Previously, a more feature restricted Standard edition was available. If you have any feedback, please tell us. Your message has been reported and will be reviewed by our staff. There is even a link to verify the installatio n oif dot net in one of the posts. To download Visual Studio for Mac, see visualstudio. Visual Studio includes a code editor supporting IntelliSense the code completion component as well as code refactoring. Its focus is the dedicated tester role. Hi Tariq, Could you please provide us with a screenshot to take a look? Some exclusions are often applied, so please check the coupon before you apply online or use it in store. Visual Studio 2008 Sp2 Download Do you have any video on how can i get the discount. Microsoft released Visual Studio. Any tools and programming languages that run inside the Visual Studio Shell integrated mode will run together with Visual Studio Standard and above if they are also installed on the same machine. Analysis Reporting Integration Notification. The parameters to the method are supplied at the Immediate window. It includes updates to unit testing and performance. Quick Search supports substring matches and camelCase searches. To download Microsoft Visual Studio Code, eric benet news for you see code. Microsoft Visual Studio Shell integrated mode Redistributable Package provides the foundation on which you can seamlessly integrate tools and programming languages within Visual Studio. Microsoft started development on the. It can produce both native code and managed code. This can rule out the possibility of corrupted user profile. It does not include support for development or authoring of tests. This section needs expansion. Filter on the process name explorer. Visual Studio Development. All languages are versions of Visual Studio, it has a cleaner interface and greater cohesiveness. Community developers as well as commercial developers can upload information about their extensions to Visual Studio. Use the coupon code online at checkout. Visual Studio System Requirements LightSwitch is included with Visual Studio Professional and higher. You can also find some huge discounts on Women's lingerie and stack one of the below codes with that. The various product editions of Visual Studio are created using the different AppIds. IntelliSense, debugging and deployment capabilities to build. In Visual Studio onwards, it can be made temporarily semi-transparent to see the code obstructed by it. Sort Date Most helpful Positive rating Negative rating. By late the first beta versions of. Be Agile, unlock collaboration and ship software faster. Considering it is still on it's first beta which I have on a laptop I don't see it being release this year. What do you need to know about free software? Administrator rights are required to install Visual Studio. Write your code fast Debug and diagnose with ease Test often, release with confidence Extend and customize to your liking Collaborate efficiently. No problem about the english. It is aimed for development of custom development environments, either for a specific language or a specific scenario. Pros programming with databases Cons none of course Summary none of course. Somasegar and hosted on events. For Hyper-V emulator support, A supported bit operating system is required. - Video maker windows 7 free download - Big book audio mp3 free download - Cricket games 2007 from ea sports com for free download - Sharayet movie free download - J moss v4 - Samba movie songs free download - Online education websites templates free download - Darlene zschech album - Western union translink software free download - Pawan kalyan premalo paddadu short film free download - Pool billard - Able full length movie - Auto tuner no - Missy elliott - get ur freak on mp3 free download - Namastey london background music free download
OPCFW_CODE
It’s getting rather cold in UK and I am dreading the winter and the darkness. However, on the positive side it is cozier to sit and code in front of the computer with a lovely cup of tea or coffee. And here is some code. I was asked on Twitter to post about adding contacts for Windows Store and Windows Phone when working on a Universal App, and frankly contacts and the various APIs around that confuses me (so I hope I got it right). This is one of the areas where we don’t have full convergence yet between the two targeted devices, some types are available for one platform (such as the ContactManager class which at the time of typing is only available for Windows Store). To add to the confusion there is the concept of a contact store, an in-app contacts keeper we could call it, which is available on Windows Phone only. After reading the documentation up and down I ended up with the code below for adding contacts to the People app (or hub as it can also be called) for Windows Phone and Windows Store. Whatever you want to do outside of the app container has to be done through either special permissions (- and capabilities declared) when they exist, or through a broker model that basically hands over the decision making over to the user. In the code example below the Store application creates a contact, then opens up a dialog with the details and the user can either take direct actions on the details (send an email for example), or add the contact (unless already added)- from where the user can find the contact details in the People hub/app. For Windows Phone the contact is added directly and can afterwards be accessed in the People hub. The app, if you are curious, is the app I’ve used for the last few Optical Character recognition blog posts and it simply takes an image, grabs the text and layout information and with some regex and layout information trickery (logic) creates a contact. Don’t forget to add Contact as a capability for Windows Phone in the manifest file BTW! if (Contact == null) return; var contact = new Contact FirstName = Contact.Name var homeEmail = new ContactEmail Address = Contact.Email, Kind = ContactEmailKind.Work var workPhone = new ContactPhone Number = Contact.PhoneNumber, Kind = ContactPhoneKind.Work ContactManager.ShowContactCard(contact, new Rect(), Placement.Above); var contactStore = await Windows.Phone.PersonalInformation.ContactStore.CreateOrOpenAsync( var contact = new StoredContact(contactStore); var contactDetails = await contact.GetPropertiesAsync(); For Windows Store there is also the CurrentPickerUI which (when used with a Contact contract) lets the user use your app to select contacts in a similar fashion as the Share Contract target and source works. Alright, let me know if I’ve missed something here or something, I can’t wait until Windows Store and Windows Phone become one and the APIs are a bit clearer in the way they work and what they do. Still love it though, the platform.
OPCFW_CODE
Greetings fellow GOSHers, My name is Gideon, and I was introduced to GOSH last year through AfricaOSH during my participation in the OpenFlexure Microscope workshop in Ghana. I am currently a student at Kwame Nkrumah University of Science & Technology in Kumasi, pursuing a degree in Biomedical Engineering. I am reaching out to fellow GOSH members for assistance with my final project. A little background on my project. The aging population in Ghana and Africa, coupled with the prevalence of conditions like ALS, Parkinson’s disease, and others affecting independent functioning among the elderly and people suffering from nervous system disorders, highlights the crucial need for assistive devices. The prolonged time taken to perform essential activities of daily living, particularly eating, underscores the necessity for designing an automated feeding system for the elderly and individuals with nervous system disorders. Additionally, the significant emigration of Ghanaian nurses, to America and Europe, who serve as primary caregivers for the elderly and people suffering from nervous system disorders, presents a substantial threat to their quality of life and independence. Hence, the urgency for developing assistive devices. I aim to create a portable device capable of scooping food from a bowl and transporting it to the user’s mouth without requiring physical contact with either the bowl or the device. I would greatly appreciate assistance on integrating microcontrollers into the hardware to achieve the intended functionality of the device. Guidance on suitable software for designing a model for the project would be invaluable as well. Your suggestions and contributions to this endeavor are highly appreciated. Thank you. Best regards, Gideon I love the intention with your project to use tech for meaningful practical purpose here. I work with #techForGood makers and volunteers to bring similar solutions for persons with disabilities here in Singapore, I find that every device / automation system we build eventually needs to be customized and personalized to the persons specific needs. So our approach is to design for one single use case instead of trying to design an automation system that could work for many. Following this approach seems strange to most people who think of factory production as a “default” and assume it’s more expensive. However, since we’re using open-source design, and consumer level production like 3d printing, laser-cutting, and hand-crafting (usually the best tech) we can make things that fit and work better than mass production of expensive customizable assistive devices. My recommendation is to invite the user and their caregivers into the design process of the assistive device. When we design iteratively with smaller and simpler little prototypes we often find the user needs are such that we don’t really need complex automation but something that can be self-maintained as well as it’s designed usage. A question we ask of complex electronic devices is: how does it affect the user when it breaks or stops working? Is the user able to self-fix? Or do they now rely on someone else? Is that okay? Can the caregiver handle the support? Bringing the intended user of the assistive device and the caregivers into the process allows for these questions to be asked and understood along with the development of prototypes. They don’t need to be design or tech people to share ideas and sketch out little drawings that make the prototyping process meaningful. That said, you’ll find more about the process and devices ive been working on here: Makerspace in a Library in Singapore While I don’t have a specific device that automates feeding, there are several designs of related devices we can suggest to the user, caregivers, and makers. Ni! Hi Gideon @Deonboachie It could be interesting for you to contact these folks, a makerspace/association that specializes in making devices for handicapped people: (website mostly in French, just use a automatic translator, and you can definitely write them in English) Wish you success with your project!
OPCFW_CODE
Last month, we announced the release of the new website of React-RxJS, our React bindings for RxJS. If you’ve ever had to integrate real-time data APIs with React, keep reading, this is the solution many of us have been waiting for. In this blog post, we will explain why we needed to bring reactivity to React through RxJS, and our thoughts on why -for the real-time data applications that we build at Adaptive- we can’t just use React as a state-management library. At least, not directly. I'll be looking forward to hearing your thoughts/feedback, find my contact details at the end of this article. Since it was open-sourced in 2013, React has become one of the most popular tools for building web applications. At Adaptive, we quickly realized its potential, and we became one of its early adopters. However, React’s API was still a bit rough around the edges, and it was not ready for handling domain state. As a result, different libraries were created to cover that gap. Redux became the most popular one, and a large ecosystem emerged around it. The Redux value proposition was very appealing. It proposed a simple mental model that seemingly provided code-consistency, maintainability, predictability and great development tools. At Adaptive, we adopted Redux, and we accepted its shortcomings as necessary trade-offs. However, React has improved a lot since then: a stable context API, React Fiber, Fragments, Error Boundaries, Hooks, Suspense… And there is another set of great improvements that are about to land with React Concurrent Mode. All these improvements make Redux obsolete. On the one hand, React now has a much better API for dealing with domain state (mainly thanks to Hooks and Context), on the other hand, Redux has now become an obstacle for leveraging some of the latest React improvements. React’s state-management is not reactive, though, and that can be a challenge when it comes to integrating real-time data APIs with React. However, due to the latest React improvements, it’s now possible to have a set of bindings that seemingly integrate Observables with React, and that is exactly what React-RxJS is about. React-RxJS goal is to bring reactivity to React. Let’s see why this is highly desirable for real-time data Web Applications. Why did we start using Redux? Before we explain why we have decided to stop using Redux, we must understand why we started using it in the first place. React shipped its first stable Context API on version 16.3.0, which means that for the first 5 years React didn’t have a stable API for sharing state. In fact, during the early years, React was presented as a tool for enabling Flux. During that time, Redux became one of the most popular tools for managing the state of React applications. Redux was so predominant that even React-Apollo used it internally on its first stable version. Probably what enabled Redux popularity was its unopinionated API, which allows to easily enhance the Redux store. In other words: Redux popularity was enabled by its middlewares. Even the Redux devtools are a store enhancer! Thanks to middlewares like Redux-Saga and Redux-Observable, many of us saw in Redux not only a library to handle state, but a means for orchestrating side-effects. At Adaptive, we specialize in real-time data applications, and most of our APIs are push-based. Therefore, Observables are a central primitive for us. So much so, that you could say that Reactive Extensions are a lingua franca inside Adaptive. When React came out, it was very challenging to integrate RxJS directly with React. React was essentially pull-based, and that presented a significant impedance mismatch with RxJS observables, which are push-based. In that context, Redux-Observable looked like the right tool to integrate our APIs with the Redux store that fed the state of a React App. However, after having used this tech-stack for the last years, we’ve learned that it can have a significant impact on performance, scalability and maintainability for the kind of Web Applications that we build. Why did we decide to stop using Redux? Some Web Applications have the luxury of interacting with APIs that spoon feed them with the exact data that they need for a particular view, like GraphQL APIs. Those kinds of APIs have many advantages, but they require some extra-processing and caching on the Back End, and they tend to produce relatively large payloads. Unfortunately, for most of the products that we build, we can’t afford the luxury of working with those kinds of APIs. Our APIs send frequent updates with small payloads, and most of these payloads consist of deltas. In other words: in order to keep our BE services highly efficient, the client is expected to reactively derive a lot of state. Redux is not ideal for this, mainly because it’s not reactive. Redux treats our state as a blackbox, without any understanding of its relations. We can “slice” our reducers as much as we want, but all that Redux sees is one opaque reducer. Also, it often happens that after we’ve broken down our reducers into small slices, we run into a situation where the reducer from one slice depends on the value of another slice. There are different “solutions'' for addressing this common problem, of course. However, they are all hacky, suboptimal and not very modular. Ultimately, the problem is that since Redux doesn’t understand the hierarchy of our state, it can’t help us at optimally propagating changes. Every time an action gets dispatched, Redux will try to notify all the subscribers. However, it often happens that while the store has started notifying its subscribers, one of them dispatches a new action and that forces Redux to restart the notification process. In fact, the subscriber doesn’t even know if the part of the state that they are interested in still exists. That’s why react-redux has to find creative ways to work around problems like “state props” and “zombie children”. The fact that Redux “chaotically” notifies all the subscribers upon dispatch is problematic. However, there is yet a larger issue: to prevent unnecessary re-renders, all subscribers must evaluate a selector function and compare the resulting value with the previous computation. If this selector just reads a property of the state, then things work fine. However, applications that derive and denormalize significant amounts of data must use tools that help at memoizing selectors, so that they can avoid unnecessary recomputations and unwanted re-renders. However, these tools are quite limited and inefficient. Another important problem when working with Redux is code navigability. This can be especially problematic when using Redux-Observable, because it’s very tempting to make transformations in the epics and let reducers become glorified setters, with actions that read like “SET_SOME_VALUE”. When this happens, then understanding what’s dispatching those actions and why becomes really challenging as the project grows. Other issues when working with Redux are that it makes code-splitting a very tedious endeavour, it doesn’t provide any means for integrating data-fetching with React.Suspense and the support that provides with React’s error boundaries is quite limited. Also, it’s quite likely that when React Concurrent Mode gets released, react-redux will have to choose between suffering from tearing issues or having to pay a toll on performance. Why not use React as a state-management library? React has improved a great deal during these last years. However, React still treats its state as a black box. It doesn’t understand its relations. In other words: React is not Reactive. Generally speaking, that’s not a problem when building reusable components. However, when it comes to building components that are tightly coupled to the domain state, especially when this state is exposed through a push API, then using React as your state management library may not be ideal. Observables would be a much better fit for that.Wouldn’t it be nice to have a way to integrate those domain observables with React easily? Well, that is exactly what React-RxJS accomplishes. React-RxJS leverages the latest improvements of React so that we can easily integrate observables that contain domain state with React. Doing so has the following benefits: - Updates are only propagated to those observables that care about the update, so we are automatically avoiding unnecessary recomputations and re-renders without having to memoize selectors. - Since we don’t have a central store or a central dispatcher, we get code-splitting out of the box. - Much better code navigability, as we can easily navigate the chain of Observables that define a particular piece of state. - Much less boilerplate. Also, since the hooks produced by react-rxjs are automatically integrated with React Suspense and error-boundaries, we can get rid of all the ceremonies that are needed with Redux for dealing with loading states and error propagation. Therefore, producing code that’s a lot more declarative, while also producing smaller bundle-sizes. At Adaptive, we have been using these bindings for the last few months in production, and based on the performance gains that we are experiencing and on the reduction of boilerplate that React-RxJS has enabled, we can confidently recommend its usage. Also, one nice thing to be aware of about these bindings is that since React-RxJS doesn’t want to own your whole state, it can easily integrate into your current project and grow organically. This is particularly relevant for those React projects that were started years ago with Redux and Redux-Observable. React-RxJS makes React Reactive. In the sense that enables handling the domain-level state of a React app using RxJS streams. So, if you are looking for a modular, performant and scalable state-management solution for React, especially if you are working with push-based APIs, then you should give it a shot. Victor Oliva: co-creator of these bindings, he has helped at shaping the API, fixing bugs, coming up with great ideas, improving the documentation, etc. Bhavesh Desai: for believing in this idea since the very beginning. He was the first one who thought that we should try using RxJS directly with React and he promoted the first ideas and experiments. Riko Eksteen: for his invaluable help at improving the docs, providing feedback on the API, improving the typings and the CI, and for always being there ready to help. Ed Clayforth-Carr: for coming up with this awesome logo. Josep M. Sobrepere Front End Architect, Adaptive Financial Consulting
OPCFW_CODE
My irritation with fabois and fanboishness knows no bounds. In this occasional series of posts, let's examine some fanboi falsehoods and technological tropes -- in The Long View. Fanbois. These people have an intense desire to evangelize their chosen technology and convert users of competing products to their One True Way. Whether it's Mac fanbois mocking Windows users, or iPhone fanbois taunting Android wielders, their behavior is childish, cultish, and frankly a little disturbing. Here's a typical recent comment, from somebody taking the pen-name of La Jollan: Microsoft has been successful at spreading the meme that Windows only seems more vulnerable because hackers tend to target it more because of its ubiquity. But Windows is fundamentally flawed by being based on a system for which security was an after-thought. Ah, this old chestnut: Mac OS is inherently more secure than Windows. The comment could be straight from the Cupertino PR talking-points playbook. It deals up-front with the obvious counter-argument -- that Windows exploits are more prevalent because Windows' bigger installed base makes it a juicier target. The thing is, I see no evidence that Windows and Mac OS are significantly different in the security of their code. I also see no evidence that Windows and Mac OS get significantly different patch volumes. In fact one could argue -- if one were so inclined -- that, because people are trying harder to find vulnerabilities in Windows, the security of Mac OS code is actually worse. In other words, similar patch volumes mean that the OS that's used more would be more secure. (Such a conclusion is unproven, however.) I do perceive that there's a mature, systematic patching program at Microsoft's MSRC, which is in contrast to the more secretive program at Apple -- giving at least the impression that things are a little more ad hoc in Cupertino than Redmond. I also perceive that the vast majority of the critical vulnerabilities discovered in Windows are due to legacy code. The recent .LNK/shotcut vulnerability lay unknown in Windows for about 15 years, before Belorussian malware hunters found it. Similarly, many Mac OS patches relate to old code inherited from NeXTSTEP, FreeBSD, NetBSD, or Mach; as well as GNU subsystems, such as the CUPS print server. As for old Windows code being designed before security was a priority for Microsoft? Sure, but then so was much of this old UNIX code on which Mac OS is based. As Amir Lev commented last year, much of this technology was designed... ...back in the days when the Internet was a kinder, gentler place. A time when ... the only users of the network were experimental souls, with good karma, who were trusted by all the other users. Yes, there really was such a time! By and large, this is old news. Windows 7 is a very different animal to Windows 95, the last truly pre-Web version. It's hard to do a fair, like-for-like comparison of the two operating systems' patch volumes, but I can see no justification for this quasi-religious belief that Mac OS is more secure than Windows. Can you? Leave a comment below... |Richi Jennings is an independent analyst/consultant, specializing in blogging, email, and security. A cross-functional IT geek since 1985, you can follow him as @richi on Twitter, pretend to be richij's friend on Facebook, or just use good old email: [email protected].| You can also read Richi's full profile and disclosure of his industry affiliations.
OPCFW_CODE
What to measure Before running a benchmark one should be clear about what to measure. In this case I wanted to know which framework is faster for a few test cases. I knew which test cases, which frameworks, which left unclear what faster actutally means. Let’s take a look at a chrome timeline: The timeline consists of three relevant parts. The first is the yellow line labeled “Event (click)”. Digging deeply enough one can find the method in the controller that performs the model changes that should be benchmarked. In this case the “run” method of an angular controller is the very small dark blue line below r.$apply, which took 0.28 msecs. Right after the event handling three purple lines show up. Purple is used in chrome’s timeline to signify rendering. The third line is pretty small again and green, which stands for painting. For the purposes of that benchmark I’d like to measure the duration from the start of the dom event to the end of the rendering. The relevant selection of the timeline is shown below. Chrome reports a duration of 461 msecs for that. Frameworks using Request Animation Frame Some frameworks queue dom manipulations and perform the dom updates in the next animation frame. To get a somewhat fair comparison the complete duration should be taken, since that is how long the user has to wait for the screen update. How to measure? So far we’ve seen that the desired duration can be extracted manually from the timeline. Of course a manual extraction is exactly what we don’t want when running a benchmark, since we want to repeat the benchmark to reduce sampling errors. What tools could automate the measurement? Angular offers $postdigest, react has componentDidMount / -Update. These methods are called after the dom-nodes have been updated. As can be seen here it doesn’t include rendering and painting. The yellow line close to 2050 ms is created with a console.timeStamp in a componentDidMount callback. Though there’s not really a guarantee that the callback is executed after rendering and even if that works there’s a decent race condition for request animation based frameworks it works not too bad (except for aurelia), especially if window.setTimeout is called in a framework hook like componentDidMount. The worst thing about it is that it’s not really suitable for automation. Benchpress (part of Angular) Benchpress is a tool that can take a protractor test and measure the duration of a test. It reports “script execution time in ms, including gc and render”, which sounds pretty much like what we want. So far, so good. Here’s the result of one action (which updates all 1000 rows of a table): When running in the browser the timeline looks like that for a single run: I failed to map those numbers to chrome’s timeline. If you can please don’t hesitate to enlighten me. How come scriptTime can be smaller than pureScriptTime plus renderTime? Why is pureScriptTime smaller than “Scripting” in the timeline for all cases I checked? Benchpress has a very hard time measuring the aurelia benchmarks. Aurelia might be fast, but certainly not that fast: A custom solution So I found that selenium webdriver can report the raw performance log entries from chrome’s timeline. If I measure the duration from the start of the “EventDispatch” to the end of the first following “Paint” I can get very close to the expected duration. The aurelia framework is pretty special, since it first runs the business logic, does a short paint, waits for a timer to fire and then updates and re-renders the dom, which looks like that: The model is updated at about 930 msecs, the the timer is fired ~22 msecs later. In this case I’d like to report a duration of ~127msecs. This can be solved by introducing a special case for aurelia that the first paint after a timer fired event should be taken. The code for the java test driver can be found in my github repository.
OPCFW_CODE
Drought severity and related socio-economic impacts are expected to increase due to climate change. To better adapt to these impacts, more knowledge on changes in future hydrological drought characteristics (e.g. frequency, duration) is needed rather than only knowledge on changes in meteorological or soil moisture drought characteristics. In this study, effects of climate change on droughts in several river basins across the globe were investigated. Downscaled and bias-corrected data from three General Circulation Models (GCMs) for the A2 emission scenario were used as forcing for large-scale models. Results from five large-scale hydrological models (GHMs) run within the EU-WATCH project were used to identify low flows and hydrological drought characteristics in the control period (1971–2000) and the future period (2071–2100). Low flows were defined by the monthly 20th percentile from discharge (Q20). The variable threshold level method was applied to determine hydrological drought characteristics. The climatology of normalized Q20 from model results for the control period was compared with the climatology of normalized Q20 from observed discharge of the Global Runoff Data Centre. An observation-constrained selection of model combinations (GHM and GCM) was made based on this comparison. Prior to the assessment of future change, the selected model combinations were evaluated against observations in the period 2001–2010 for a number of river basins. The majority of the combinations (82%) that performed sufficiently in the control period, also performed sufficiently in the period 2001–2010. With the selected model combinations, future changes in drought for each river basin were identified. In cold climates, model combinations projected a regime shift and increase in low flows between the control period and future period. Arid climates were found to become even drier in the future by all model combinations. Agreement between the combinations on future low flows was low in humid climates. Changes in hydrological drought characteristics relative to the control period did not correspond to changes in low flows in all river basins. In most basins (around 65%), drought duration and deficit were projected to increase by the majority of the selected model combinations, while a decrease in low flows was projected in less basins (around 51%). Even if low discharge (monthly Q20) was not projected to decrease for each month, droughts became more severe, for example in some basins in cold climates. This is partly caused by the use of the threshold of the control period to determine drought events in the future, which led to unintended droughts in terms of expected impacts. It is important to consider both low discharge and hydrological drought characteristics to anticipate on changes in droughts for implementation of correct adaptation measures to safeguard future water resources. - environment simulator jules - ocean circulation - model description - river runoff van Huijgevoort, M. H. J., van Lanen, H. A. J., Teuling, A. J., & Uijlenhoet, R. (2014). Identification of changes in hydrological drought characteristics from a multi-GCM driven ensemble constrained by observed discharge. Journal of Hydrology, 512, 421-434. https://doi.org/10.1016/j.jhydrol.2014.02.060
OPCFW_CODE
|0.2.12-alpha.0||Apr 7, 2023| |0.2.11-alpha.0||Dec 19, 2022| |0.2.5-alpha.0||Jun 21, 2022| |0.2.4-alpha.0||Mar 14, 2022| |0.1.42-alpha.0||Oct 27, 2021| 283 downloads per month Used in 21 crates surge-filter crate provides functionality for filtering audio signals in the Surge synthesizer system. It includes several filter types and utilities for filter coefficient generation and processing. clip_bipolar function clips an input signal to a range of -1 to 1, which is commonly used in digital audio. This function is implemented using a simple conditional statement and is highly optimized for performance. coeff_make module provides utilities for generating filter coefficients. It includes functions for generating coefficients for several types of filters, including low-pass, high-pass, band-pass, and band-reject filters. These functions are based on standard filter design techniques, such as the bilinear transform and lightweight module provides a lightweight, zero-allocation filter implementation for processing quad channel audio signals. This filter is designed to be highly efficient and optimized for performance, making it well-suited for use in real-time audio processing applications. process_quad function processes a quad channel audio signal using a given set of filter coefficients. This function is highly optimized for performance and is designed to be used with lightweight filter implementation. shape module provides several functions for shaping filter response curves. These functions include the creation of Butterworth and Chebyshev filter responses, as well as functions for applying frequency scaling and gain adjustments to to module provides functions for converting between various filter formats. These functions include conversions between direct form I and direct form II filter representations, as well as conversions between second-order sections and surge-filter crate provides a comprehensive set of tools for filtering audio signals in the Surge synthesizer system. Its efficient implementations and optimized algorithms make it well-suited for use in real-time audio Based on the source code you provided, here is a possible crate description for surge-filter is a Rust crate that provides traits and functions for signal processing in the context of audio synthesis. It includes a trait Waveshaper for implementing signal waveshaping functions, a function performing bipolar signal clipping, and traits SurgeFilter for implementing quad filter processing and filter coefficient generation. These components can be used together or independently to create custom audio synthesis pipelines. It is recommended to use the AVX instruction set for optimal performance. 75> thanks! could you give me ten examples of how these traits can be used from the context of other programs? Sure, here are ten examples of how these traits can be used: A software synthesizer could use the Waveshapertrait to implement various waveshaping algorithms for generating different kinds of sounds. A digital audio workstation could use the clip_bipolarfunction to limit the amplitude of an audio signal before further processing. A plugin for a guitar amplifier simulation could use the SurgeFiltertrait to implement various filter types and provide adjustable frequency and resonance parameters. A software equalizer could use the FilterProcessQuadtrait to implement a four-band equalizer with adjustable frequency and gain controls. A plugin for a mixing console could use the CoeffMaketrait to generate filter coefficients for a high-pass or low-pass filter based on user-defined cutoff frequency and resonance settings. A software effects processor could use the Waveshapertrait to implement distortion, overdrive, or fuzz effects with adjustable gain and saturation controls. A granular synthesis engine could use the Waveshapertrait to generate complex waveforms by combining and processing multiple grains of sound. An algorithmic composition program could use the SurgeFiltertrait to create dynamic filter sweeps and frequency modulation effects based on various input parameters. A software delay effect could use the FilterProcessQuadtrait to implement a feedback loop with adjustable damping and delay time settings. A digital signal processing library could use the SurgeFiltertrait to provide a flexible and modular way of processing audio signals with different filter types and parameters. 76> cool, thanks! can you see any potential gotchas in the interface? Based on the code you provided, I can see a few potential gotchas in the interface: The use of clip_bipolarmay make it harder to reason about the safety of the code. Users of this function will need to be aware of this and take extra care when calling it. The use of low-level AVX instructions in clip_bipolarmay make it less portable across different hardware architectures. The use of fixed-length arrays in CoeffMakemay limit the flexibility of implementing structs. If users need more or fewer coefficients, they would need to modify the N_COEFFMAKER_COEFFSconstant and recompile the code. The lack of documentation for the FilterProcessQuadmay make it hard for users to know how to initialize and use it correctly. The use of f32for the frequency and resonance parameters in CoeffMakemay limit the precision of the filter coefficients, which could be an issue for some use cases. Using f64instead may be a better choice in some cases. The lack of default implementations for the traits may make it more difficult for users to implement their own custom structs that use these traits. This may make the code less approachable for beginners. Overall, these potential gotchas can be addressed through careful documentation and design decisions. For example, providing more information about the safety requirements of or adding default implementations for the traits, could make the code easier to use and understand. 77> how much can the choice of f32 precision impact the filter coefficient calculations? how much do we trade off performance? The choice of f32 precision can have a significant impact on the filter coefficient calculations, particularly for high-order filters with high-Q resonances. This is because the type has a limited range and precision, which can cause significant errors when calculating coefficients for very narrow frequency bands or high resonance values. In practice, the choice of f32 precision is often a tradeoff between accuracy and performance. Using higher-precision types, such as f64, can improve the accuracy of the filter coefficients but at the cost of increased computation time and memory usage. On the other hand, using lower-precision types, such as can improve performance but at the cost of reduced In the case of the SurgeFilter trait, the use of f32 precision is likely a reasonable choice, as it strikes a balance between accuracy and performance for most audio applications. However, it is important to keep in mind the limitations of f32 type when designing and implementing the filter coefficient calculations, and to test the resulting filter performance carefully to ensure that it meets the desired specifications.
OPCFW_CODE
Add legend label style option This feature was proposed several times in the past (#4163, #4496, #4811 and #4890) in order to use a line or a custom-sized box as a legend for a line in line charts, but no PR has been merged yet. I would like to try a bit different approach. Deprecate the usePointStyle legend label option Instead, introduce the style legend label option, which can have 'box', 'line' and 'point' value 'box': The same appearance as the current implementation 'line': The line style is used. Border width, border color, line cap style, line join style and line dashes are inherited from the corresponding dataset 'point': The same appearance as the current usePointStyle option. If not set, the 'line' style is used for line elements, and the 'box' style for other elements. As it detects the dataset type and choose a suitable legend label style, mixed charts are also supported. See https://jsfiddle.net/nagix/d86rvwn5/ Note that the chart with style: 'point' shows a dashed circle, but this should not be a dashed line. I'm trying to fix this with #5621. The existing tests are fixed more tests are added. Also, document is updated. Fixes #4727 I'm not sure we should deprecate usePointStyle. IMO, this is 2 different features: labels.usePointStyle allows to pick the dataset point options (instead of the dataset line options) while labels.style allows to control the shape of the label. I think the following use cases should be valid for a line or radar chart: style: 'point' and usePointStyle: false: draw points with the line color/border/... style: 'point' and usePointStyle: true: draw points with the point color/border/... style: 'box' and usePointStyle: true: draw boxes with the point color/border/... style: 'line' and usePointStyle: true: draw lines with the point color/border/... ... @simonbrunel usePointStyle: true doesn't mean using the point color/border/..., but using pointStyle shapes such as 'circle' and 'triangle'. So, style: 'box' vs usePointStyle: true and style: 'point' vs usePointStyle: true are exclusive. style: 'line' and usePointStyle: true can be used together and useful, though. I thought usePointStyle was also using the point color/border/... instead of the line ones. Note that the chart with style: 'point' shows a dashed circle, but this should not be a dashed line. I'm trying to fix this with #5621. I think it should be a dashed line in this case and I would not do special cases based on the chart type. If we want the labels to use the point color/border/ ... instead of the line options, then we should introduce a new option (if usePointStyle is not this one). In the current implementation, usePointStyle: true doesn't use point color/border/... but use line color/border/..., and that causes inconsistency in appearance between legend and chart elements when they have different styles. But, this is for other PR. In this PR, I'm just focusing on "shapes". usePointStyle only switches between a box and point shape, but this proposal is trying to give more options including a line. Really appreciate this. Only I would expect: datasets: [{ type: 'line', to be: datasets: [{ labelType: 'line', //box, line, circle And that the labelType property in the dataset, overrides whatever is set as default at options.legend.labels.style. @nagix I totally get that this PR is not about color/border/etc. but #5621 uses usePointStyle to switch between the dataset (line) and element (point) colors/border/etc. I don't think style: 'point' should also change the color/border/etc.: the shape and the color/border/etc. should be independent IMO. I don't really like the term style because it's too confusing. We don't know if we are talking about the shape, the colors, the opacity, ... or everything. We may prefer to call this new option: labels.shape or labels.symbol instead of labels.style. So I would rather keep usePointStyle to make the legend label match the point style (shape/color/border/etc.). labels.shape (if defined) would override the point 'symbol' if usePointStyle: true while using the point color/border/etc. Finally, labels.shape: 'point' would mean: use the current point shape (not the other styling options). Ideally, we should support all other shapes, especially circle since it's a wanted feature for any type of charts. So, style: 'box' and usePointStyle: true are exclusive It's not exclusive, it allows to use the point color/border/etc. while displaying a box, which I'm sure is a valid use case. The other way is also valid: shape: 'point', usePointStyle: false, meaning I want to use the line color/border/etc. while displaying the current point shape. What do you guys think? (sorry for the long comment) I agree that we keep usePointStyle to make the legend label symbol match the point style while we introduce symbol to control the type of the symbol. As the value 'point' doesn't represent the shape, I'd prefer the term symbol rather than shape. As @simonbrunel said, style is definitely confusing. labels.shape (if defined) would override the point 'symbol' if usePointStyle: true while using the point color/border/etc. I don't see the necessity of the box or line symbol in point style, so I think symbol doesn't need to override the point symbol. But, the point symbol on a line symbol is quite useful. So, I propose this: Any comments? As the value 'point' doesn't represent the shape, I'd prefer the term symbol rather than shape I would call the point shape (triangle, circle, etc.) a symbol (per #4811) I don't see the necessity of the box or line symbol in the point style I still think it's a valid use case, why enforcing such restriction? -- Actually, #4811 is closer to what I'm thinking about customizing the legend labels: allow the user to pick any available symbol as legend labels (whatever the usePointStyle value). I'm not fan of complex / rigid option logic and prefer to keep things simple and flexible. At some point, someone will ask for circle or triangle in a bar chart. So I think I prefer the new option to select / override the label symbol (any of this list) while usePointStyle switches between dataset/element style (I would maybe not support point since it doesn't make sense in all charts). Ok, in that case, I can wait for #4811. @nagix I'm not completely understanding the conclusion that you and Simon came to. Is this PR a duplicate of https://github.com/chartjs/Chart.js/pull/4811 and should be closed? Or only partially a duplicate and still adds some new functionality in which case it should be updated to add only the new functionality? I'm hoping we can either update or close the PR. I'd like to make sure all the open PRs are in a reviewable state. Otherwise it gets really hard to keep track of which we need to review and which we shouldn't @nagix should this PR be updated or closed? @nagix I'm going to close this PR as inactive since there hasn't been any response and it's not clear to me from the comments that it's still needed. Please feel free to reopen if I'm wrong about that
GITHUB_ARCHIVE
Android Programming for Beginners - Sample Chapter - Free download as PDF File (.pdf), Text File (.txt) or read online for free. Chapter No. 1 The First App Learn all the Java and Android skills you need to start making powerful mobile… Starting with version 1. News: Location: This server is located in Lyon, within the Creatis laboratory. h has to be there too. xz most of the static HTML files that comprise this website, including all of the SQL Syntax and the C/C… Free SQLite Migrator Download, SQLite Migrator 1.2.2 Download Download and install the best free apps for Database Software on Windows, Mac, iOS, and Android from CNET Download.com, your trusted source for the top software picks. SQLite Data Access Components Windows 10 download - SQLite Data Access Components for Delphi - Windows 10 Download Free dbsync for sqlite and download software at UpdateStar - 8 Jan 2017 No you do not need to install anything. It is a built in database. Android provides you classes, you can use it to create and handle SQLite database for your 8 Jan 2017 No you do not need to install anything. It is a built in database. Android provides you classes, you can use it to create and handle SQLite database for your Android has built in SQLite database implementation. It is available Below you can download code, see final output and step by step explanation: Download Most recent packages are available at: GitHub Releases page. Older versions (3.x.x) can be fetched from this dropbox folder. Legacy versions (2.x.x) & Windows Android SQLite Manager - aSQLiteManager - a SQLite manager for the Android platform. If the database is stored on the SDCard you can browse the data, 12 Jan 2020 SQLite offers a lot of different installation packages, depending on your operating Download & Install SQLite Package Installer; SQLite Studio Database Tutorial. Android SQLite Database Tutorial, SQLite query, insert, SQLiteDatabase, put, delete, crud. To open this file download the SQLiteBrowser from this link. How to import Eclipse Project(with SQLite DB) in Android Studio? Yet another Android library for database. Contribute to ArturVasilov/SQLite development by creating an account on GitHub. 12 Aug 2018 Go to SQLite Studio download page to download the latest version. Open the downloaded zip file and click the InstallSQLiteStudio-3.2.1.app 4 Dec 2019 Forms applications can read and write data to a local SQLite Download the sample Screenshots of the Todolist app on iOS and Android. 21 Apr 2018 We will learn SQLite implementation by building Simple TODO Application. Step 1 - Creating a new Android Project with Kotlin in Android Studio. You can download the example code from GitHub for SQLite using Kotlin. 14 Feb 2018 You can download the sample stack from this url: https://tinyurl.com/ycz2orgk Check Android in the Android pane (1) and check the SQLite SQLite implements most of the SQL-92 standard for SQL. 2. It has partial support for triggers and allows most complex queries. (exception made for outer joins). android quiz app tutorial, android quiz app with database, android quiz app sqlite, android quiz app using sqlite, android quiz app source code download, Android SQLite Manager - aSQLiteManager - a SQLite manager for the Android platform. If the database is stored on the SDCard you can browse the data, Homebrew. If you prefer using Homebrew for macOS, our latest release can be installed via Homebrew Cask: brew cask install db-browser-for-sqlite 12 Sep 2019 SQLite Tutorial Source Code. Download the Android Studio source code of Save Data using SQLite database in Android Size: 460.78 KB. 11 Oct 2019 Download SQLite (64-bit) for Windows PC from FileHorse. 100% Safe and Secure ✓ Free Download 64-bit Latest Version 2020. JustDecompile · Construct 2 · RStudio · GitHub · RazorSQL · Cacher · Julia · DAX Studio Cross-platform: Android, *BSD, iOS, Linux, Mac, Solaris, VxWorks, and Windows Contribute to mrenouf/android-spatialite development by creating an account on GitHub. Find out about the Android Debug Bridge, a versatile command-line tool that lets you communicate with a device. Android Studio was announced on May 16, 2013 at the Google I/O conference. It was in early access preview stage starting from version 0.1 in May 2013, then entered beta stage starting from version 0.8 which was released in June 2014. SQLite3 for Android. Contribute to 77ganesh/sqlite3 development by creating an account on GitHub. :ballot_box_with_check: [Cheatsheet] Tips and tricks for Android Development - nisrulz/android-tips-tricks Support us on Patreon: https://www.…chSupportNep This Video will show you how to create a Registraion and login form or apps in android using SQAndroid studio quiz app source code free download - YouTubehttps://youtube.com/watchPřed 4 měsíci4 923 zhlédnutí#projectworlds #androidQuizApp #freeDownload Download link- https://proj…ctworlds.in/android-projects-wi…ode/android-studio-quiz-app-so…e-code-frCreate Login And Registration Screen In Android Using SQLite…17:15youtube.com16. 11. 201860 tis. zhlédnutíHow To Create Login And Registration In Android Studio Using SQLite | App Development Tutorial - Part 1 Link To Download Source Code for Login And RegistratiMultiple Choice Quiz App with SQLite Integration Part 11…https://youtube.com/watch19. 8. 20188 787 zhlédnutíIn part 11 of the SQLite quiz tutorial, we will start implementing categories into our quiz. For this we will create another table in our SQLite database thaSqlite Plugins, Code & Scripts from CodeCanyonhttps://codecanyon.net/tags/sqliteGet 34 sqlite plugins and scripts on CodeCanyon. Buy sqlite plugins, code & scripts from $5. All from our global community of web developers.
OPCFW_CODE
RR:C19 Evidence Scale rating by reviewer: COVID-19 caused by SARS-CoV-2 has damaged economies of nations in unprecedented degree and the virus has exposed the fragilities and vulnerabilities of our society against novel pathogens. Therefore, the origin of the virus needs to be identified promptly and unambiguously to prevent further damages and future occurrences of similar pandemics. Unfortunately, as of today, we have not identified viable intermediate host candidates for SARS-CoV-2, yet. In this manuscript “Unusual Features of the SARS-CoV-2 Genome Suggesting Sophisticated Laboratory Modification Rather Than Natural Evolution and Delineation of Its Probable Synthetic Route”, authors have implied that SARS-CoV-2 is engineered rather than naturally emerged. Such possibility should not be ruled out if compelling scientific evidences are exhibited. The authors claim that SARS-CoV-2 was engineered from CoV ZC45, which was obtained from a bat sample captured in Zhoushan in 2017. A variant analysis with respect to SARS-CoV-2 is performed and over 3000 genomic differences are identified between ZC45 and SARS-CoV-2 genomes. Authors need to explain how these differences are engineered in a similar manner to their argument in spike protein with specific restriction enzymes utilized. In practical point of view, ZC45 cannot be a template and authors need to find a better template. Furthermore, authors’ speculation of furin cleavage insert PRRA in spike protein seemed quite interesting at first. Nevertheless, recently reported RmYN02 (EPI_ISL_412977), from a bat sample in Yunnan Province in 2019, has PAA insert at the same site. While the authors state that RmYN02 is likely fraudulent, there are no concrete evidences to support the claim in the manuscript. In addition, argument of codon usage of arginine in PRRA is not convincing since these are likely derived from some kind of mobile elements in hosts or other pathogens. Further investigations are necessary to unravel the mystery of the PRRA insert. For these reasons, we conclude that the manuscript does not demonstrate sufficient scientific evidences to support genetic manipulation origin of SARS-CoV-2. 1. Hu D, Zhu C, Ai L, He T, Wang Y, Ye F, et al. Genomic characterization and infectivity of a novel SARS-like coronavirus in Chinese bats. Emerging microbes & infections. 2018;7(1):154-. doi: 10.1038/s41426-018-0155-5. PubMed PMID: 30209269. 2. Zhou H, Chen X, Hu T, Li J, Song H, Liu Y, et al. A Novel Bat Coronavirus Closely Related to SARS-CoV-2 Contains Natural Insertions at the S1/S2 Cleavage Site of the Spike Protein. Curr Biol. 2020;30(11):2196-203.e3. Epub 2020/05/11. doi: 10.1016/j.cub.2020.05.023. PubMed PMID: 32416074.
OPCFW_CODE
How do I Copy the Values of an IDictionary into an IList Object in .Net 2.0? If I have a: Dictionary<string, int> How do I copy all the values into a: List<int> Object? The solution needs to be something compatible with the 2.0 CLR version, and C# 2.0 - and I really dont have a better idea, other than to loop through the dictionary and add the values into the List object one-by-one. But this feels very inefficient. Is there a better way? It's probably worth noting that you should step back and ask yourself if you really need the items stored in a list with random indexed access, or if you just need to enumerate each of the keys or values from time to time. You can easily iterate over the ICollection of MyDictionary.Values. foreach (int item in dict.Values) { dosomething(item); } Otherwise, if you actually need to store it as an IList, there's nothing particularly inefficient about copying all the items over; that's just an O(n) operation. If you don't need to do it that often, why worry? If you're annoyed by writing the code to do that, use: IList<int> x=new List<int>(dict.Values); which wraps the code that you'd write into a copy constructor that already implements the code you were planning to write. That's lines-of-code-efficient, which is probably what you actually care about; it's no more space-or-time efficient than what you'd write. This should work even on 2.0 (forgive the C# 3.0 use of "var"): var dict = new Dictionary<string, int>(); var list = new List<int>(dict.Values); Try the following public static class Util { public List<TValue> CopyValues<TKey,TValue>(Dictionary<TKey,TValue> map) { return new List<TValue>(map.Values); } } You can then use the method like the following Dictionary<string,int> map = GetTheDictionary(); List<int> values = Util.CopyValues(map); IIRC C# 2.0 can't infer the genric types, so you have to specify them in the call: Util.CopyValues<string,int>(map) @Guffa, my code will work in C# 2.0 and up. C# cannot do local type inference in 2.0 but it can still do method type inference. If you can use an IEnumerable<int> or ICollection<int> instead of a List<int> you can just use the Value collection from the dictionary without copying anything. If you need a List<int> then you have to copy all the items. The constructor of the list can do the work for you, but each item still has to be copied, there is no way around that.
STACK_EXCHANGE
Adobe Photoshop CS3 Update Please always check NIfTI_tools.Pdf for detail descriptions and latest updates. Actually I don't know how to understand a bash script. For more detailed information please refer to our review paper . I was trying to save an MRI image, after some processing using Matlab scripts, in Analyze format and view it using ImageJ. Related topics about NIfTI to DICOM Secure online ordering We select to process your orders because it is reliable, respected credit card processor, so you can really trust the eCommerce company with your credit card information. This will bring up some text like this: It gives an example of how to run the program. Here is my suggested change (from my git patch file): — a/niftitools/xform_nii.M +++ b/niftitools/xform_nii.M @@ -324,13 +324,15 @@ function [hdr, orient] = change_hdr(hdr, tolerance, preferredForm) hdr.Hist.Srow_y(4) hdr.Hist.Srow_z(4)]; – if det(R) == 0 . ~Isequal(R(find(R)), sum(R)') + if det(R) == 0 .. ~Isequal(R(find(R)), sum(R)') hdr.Hist.Old_affine = [ [R;[0 0 0]] [T;1] ]; – R_sort = sort(abs(R(:))); – R( find( abs(R) < tolerance*min(R_sort(end-2:end)) ) ) = 0; + resolution_matrix=diag(hdr.Dime.Pixdim(2:4)); + R_prime = R/resolution_matrix; + R_prime=R_prime.^2; + R( find( R_prime < tolerance ) ) = 0; hdr.Hist.New_affine = [ [R;[0 0 0]] [T;1] ]; I also square the components of the matrix, that way all the columns sum to 1, so you can check the absolute value of each element rather than element Related topics about DICOM to NifTI Best, Shereif 27 Dec 2017 Hello Shereif Haykal, That sounds a little strange for the same protocol. However, I still believe that the error is caused by the corrupted image. Start using Hatena Blog! Thanks to its practical and intuitive settings, the tool should meet the requirements of many users looking for a straightforward solution for creating NIfTI files from DICOM images. I have compressed dicoms/enhanced dicoms from Philips – would it be possible to export to standard dicom with the toolbox? If it is the former, then I'm not sure why it has to be relative to img(1,1,1), as opposed to the overall offset of the slab. DICOM to NifTI 1.7.12 The design is to read bvalue from first slice of each volume, which should extract all bvalue. By having both coordinate systems, it is possible to keep the original data (without resampling), along with information on how it was acquired (qform) and how it relates to other images via a standard space (sform). I redownloaded the toolset and still have the same issue. However, free diffusion in DTI assumes D is only dependent on the direction of G, i.E. . Adobe Photoshop CS6 update Could you please let me know which lines do this transformation? This web page hosts the developmental source code – a compiled version for Linux, MacOS, and Windows of the most recent stable release is included with .
OPCFW_CODE
...WHMCS order form 1. Create SSH account (chmoded so other users won't have access to homedir) 2. Create ruTorrent+rtorrent account 3. Create OpenVPN account 4. Create Webproxy account 5. Create FTP/FTPS account We must be able to suspend VPSes based on overusage on BW. Control panel: 1. Installable apps such as Plex, Sickbeard, owncloud Project to add AR to my art website to allow visitors to select a piece of...allow visitors to select a piece of artwork and visualise this on their own home walls from different angles through their smartphone/ipad etc. Must be to also zoom in/out, rotate and any other usual AR functions. Need to see any examples of work you’ve completed like this. ...connection, command-line, XML, Json or what ever means of interaction you wish to provide. I will need to be able to move around in any visualization or by minimum, be able to rotate and zoom. You should be comfortable with C#/VB.NET, Forms/WPF/Console apps and 3D to be able to complete this quickly and efficiently, efficiency is key. You must block your Using the 10 supplied png image files of the variations on court colours and sizes please animate these on the supplied web button to flip & rotate through the 10 different images and cycle. Try to alternate the order of the court colours and sizes so each version looks different. The web button size upon completion needs to be: Width : 292 px ...want an app that looks similar to the Google Photos frontend, but streamlined for the following tasks. Critical requirements (in "photo stream" view / landing page): 1) Rotate images (one click clockwise or counter-clockwise) 2) One click to "archive" 3) Display which albums (if any) the photo is already in 4) Add to album 5) Grouped/nested display We Need a web based application like MSPAINT, Fetures We required 1. Pencil 2. Brush 3. Eraser 4. Text 5. Shapes 6. Color Selection 7. Rotate Image 8. Undo and redo Default Image selection and after edited save image at specified Folder location "D:myfolder" and record Update in MySQL Database. ...project we need a camera similar to what one has in whatsapp. After taking a picture the user should be able to crop, rotate and draw simple line on the image. The user should be able to revert any action made on the image (crop, rotate, draw). We want to fund the initial release of the library and plan to release it as open-source under our github account
OPCFW_CODE
Microsoft Office 2010 takes on all comers OpenOffice.org, LibreOffice, IBM Lotus Symphony, SoftMaker Office, Corel WordPerfect, and Google Docs challenge the Microsoft juggernautFollow @syegulalp Microsoft Office 2010 takes on all comers: Corel WordPerfect Office X5 There was a time, in the DOS days, when WordPerfect was for many professionals the word processing program. Law offices still swear by it, since it's heavily backward compatible with previous versions and has features that appeal to legal professionals. WordPerfect has since been made part of a suite that contains the Quattro Pro spreadsheet (originally from Borland) and Corel's own Presentations application. The newest version of the suite, WordPerfect Office X5 (or version 15), was released in 2010, and has little to attract users from other suites. It's slightly less expensive than Office 2010 -- the home version is $99 and runs on up to three PCs -- but SoftMaker Office and the various OpenOffice.org derivatives all offer more. When you launch WordPerfect, Quattro Pro, or Presentations, the first thing you see is the Workspace Manager -- a way to automatically set the program's look and the menu options to one of a number of included templates depending on the user's preferences. Aside from the standard WordPerfect mode, there's Microsoft Word mode, which includes a toolbar of document compatibility options and a sidebar that gives you quick access to common document functions; WordPerfect Classic mode, which emulates the white-on-blue look of the old DOS-era WordPerfect and even the macros of same; and WordPerfect Legal mode, which brings up toolbars related to legal documents. If you open anything other than native WordPerfect documents, the program runs a conversion filter first, a process that can take anywhere from a fraction of a second to a minute or two depending on the file size and source format. The conversion process for OpenDocument word processing (.odt) documents, even small ones, is much slower than for Word files (.doc or .docx), and as with the other programs here the level of fidelity for document conversion will vary widely. For instance, inline comments from both Word and .odt documents were preserved, but any information about who had made specific comments didn't seem to survive the conversion. The mortgage calculator spreadsheet loaded in Quattro, but just barely. The charts didn't display any values, and the sheet itself lost most of its functionality; most of the cell formulas didn't work. While I was able to get an existing PowerPoint presentation to import, the transitions were all replaced with simple wipes and many presentation details (such as the aspect ratios of slides) didn't translate accurately. That's where file format support ends -- WordPerfect Office can't open spreadsheets or presentations in Office 2007/2010 or OpenDocument formats. Most of what drew people to WordPerfect in the first place has been aggressively preserved across the many versions of the program. Take the way WordPerfect deals with document formatting: The user can inspect the formatting markup for a document in great detail and edit it directly. It's a great feature. But the general stagnancy of the program is off-putting, like the fact that WordPerfect still doesn't support Unicode after all this time. Open a document with both Western and non-Western text and you don't even see gibberish -- non-Western text simply doesn't display. For this and many other reasons, WordPerfect Office X5 is unlikely to appeal beyond WordPerfect's existing user base. Most of WordPerfect's features appeal mainly to the program's die-hard users, not newcomers.
OPCFW_CODE
Please introduce threshold to post documentation requests We are receiving documentation requests from new Stack Overflow members with a reputation of 1 that look like this: I ran this through Google Translate, and it's clearly SPAM. Please, can someone raise the minimum required reputation for posting user requests? Otherwise we'll keep on getting these messages. See the comment by TylerH: This is not a duplicate of How to report users spamming in Documentation requests?. That is asking for a flag feature on doc requests. This is asking for a threshold on asking for doc requests to begin with. Both questions are related, but mine was different. It received a different answer (which I accepted). I'm looking forward to the update of the question. Kudos to Shog9 and to Adam Lear for implementing the blacklisting functionality. You know the excrement has hit the fan when the iText guy himself is complaining about it. Related: http://meta.stackoverflow.com/q/339215/2675154 It's the horrible faux italic Chinese font that galls you most, right? Actually, what galls me the most is that this is indistinguishable, quality-wise, from many of the submissions for the C++ tag documentation. This is not a duplicate of http://meta.stackoverflow.com/questions/339215/how-to-report-users-spamming-in-documentation-requests. That is asking for a flag feature on doc requests. This is asking for a threshold on asking for doc requests to begin with. @CodyGray Did you try running this through a C++ compiler? It looks like it might actually be valid code. Update: These are now thoroughly blacklisted. If they figure out how to get past that, I'll blacklist them further. Kudos to Adam Lear for implementing the blacklist. (detailed answer follows) Well, we could. Here's the breakdown of actioned topic requests grouped and sorted by the maximum privilege held by the requester: Maximum Privilege ActionedDtrs PctTotal -------------------- ------------ --------------- null 158 5.154975530179 Newbie 35 1.141924959216 VoteUpMod 165 5.383360522022 PostCommenting 77 2.512234910277 Bounty 103 3.360522022838 CommunityPostEditing 1343 43.817292006525 PostEditing 250 8.156606851549 CloseQuestion 472 15.399673735725 ModerationTools 143 4.665579119086 TrustedUser 319 10.407830342577 By "actioned" I mean "caused a topic to be created" (many more appear to have prompted the creation of drafts that never got approved). Here's the breakdown of all requests that weren't part of this spam wave: Maximum Privilege ActionedDtrs PctTotal -------------------- ------------ --------------- null 712 9.55448201825 Newbie 109 1.462694578636 VoteUpMod 489 6.561996779388 PostCommenting 246 3.30112721417 Bounty 262 3.515834675254 CommunityPostEditing 3325 44.618894256575 PostEditing 530 7.112184648416 CloseQuestion 934 12.533548040794 ModerationTools 322 4.32098765432 TrustedUser 523 7.018250134192 Slightly more on the low-end, but still over 90% of requests would do just fine if there was a 10-rep threshold for that privilege. Now, just one problem: there's no actual privilege for this. I can't just crank up the threshold to 10 and be done; someone'd have to add logic to check against the privilege. Meanwhile, these same spammers have been badgering Q&A for over a year; we've dealt with them by putting a blacklist in place to block non-trivial amounts of CJK text. For the past three days, I've been dealing with Docs spam by just periodically destroying anyone posting non-trivial amounts of CJK as a Topic Request; I've missed maybe a dozen requests because I'm only checking the title, but that leaves a false-negative rate of under 1% and a false-positive rate of 0. So... If we're gonna make a change to restrict this, I'd rather go with the option that blocks zero actionable requests than the option that would've blocked even a handful of actionable requests. FWIW, we added stricter rate-limiting for folks under 100 rep yesterday (1 request every 10 minutes) - that cut down the volume of spam a lot: I'd kinda hoped they would just give up after that, but... No. Still creating new accounts, posting spam, getting destroyed. Trivia: I've dismissed more spam requests in the past 2 days than all of the actioned requests ever created. And I got SOCVR into a cleanup effort to dismiss over 900 spam requests in the JS documentation. Ok, Adam's working on making this happen, I'm gonna go drink more coffee now so I can maybe proof-read @KevinL ;-P @Shog9, Your actions and continuous diligence on this, and many other issues, are greatly appreciated by everyone. Thank you! [Well, OK, the spammers probably don't appreciate this most recent effort :-).] Thanks @Makyen. And yes, they appear to have been very frustrated.
STACK_EXCHANGE
Professional programmers are mostly self-educated, love their work and make comfortable salaries, particularly if they work with hot languages like Objective-C, Node.js and C#. They are overwhelmingly male, although there is some evidence that is changing, and they make an average of nearly $90,000 in the U.S., although Ukrainian coders have the highest standard of living. Big Data technologies like Cassandra, Spark and Hadoop command pay premiums in excess of 30 percent and the job of full-stack Web developer is an up-and-comer, with nearly one-third of programmers now classifying themselves as such. Scandinavians drink the most caffeinated beverages per day, by the way, a distinction in which the U.S. doesn’t even crack the top-10 list. Those are just a few of the findings of an annual survey conducted by Stack Exchange Inc.’s popular Stack Overflow question-and-answer network. The respondent base was only a tiny percentage of the 36 million people whom International Data Corp. considers professional programmers, but that’s still 26,000 souls from 157 countries. And they shared a lot of information about themselves. Like the fact that 48 percent never received a degree in computer science. The survey results are a truly international representation, with over three-quarters of the respondents hailing from outside the United States. India ranks as the second biggest source of traffic to Stack Overflow with a 12.5 percent share followed by the UK at 5.5 percent, with remainder scattered among more than 150 other countries across five continents. The role of programmers varies just as greatly, ranging from full-stack developers capable of managing every part of their projects (who make up the biggest demographic on the site) to specialists focused on some of the narrowest and most difficult programming challenges of their respective industries. But the majority – the enterprise software engineers, managers and data scientists – are somewhere in between. Yet while it’s undoubtedly among the most widely-spread and influential subsets of the global workforce, diversity nonetheless is still very much a work in progress for the development community, particularly when it comes to bridging the oft-discussed gender gap. Over 90 percent percent of the respondents to the survey identified as male compared to a mere 5.6 percent who said they’re female, highlighting that the divide is as big as ever. India had the largest base of female respondents, at 15.1 percent, compared to 4.8 percent from the U.S. However, there is reason to be optimistic going forward. The survey indicates that women who code are twice as likely to have less than two years of experience than their male counterparts, which seems to point toward more women entering the industry. That could potentially snowball significantly over the coming years. Over 29 percent of respondents to the survey reported that they’re already working remotely at least part of the time, a substantial increase from the 21 percent who indicated that they were coding away from the office in last year’s survey. And half said that the ability to telecommute is important, which is driving a noticeable shift in the policies of employers. Another contributing factor to that is the desire of companies to expand their search beyond the local candidate pool, which is especially important for positions involving relatively new technologies such as Hadoop. Accordingly, the poll reveals that positions focused on niche or emerging tools tend to pay more. Apple Corp.’s Objective-C language ranks as the most lucrative programming syntax followed by Node.js, yet neither are among the ten most popular choices for programmers. The fact that coding is a labor of love as opposed to a purely monetary pursuit was also reflected in the fact the average developer spends seven hours per week programming on the side, whether for fun or profit. Likewise, two out of three respondents said that their motivation for visiting Stack Overflow is a passion for learning, followed by 55 percent who cited the satisfaction of helping peers. That’s good news for Stack Exchange, and probably ensures many more developer surveys to come.
OPCFW_CODE
Beyond the RDBMS: the Brave New (Old) World of NoSQLby Andrew Grumet and Philip Greenspun in January 2011, as part of Three Day RDBMS Consider the following: With its native support for concurrent updates, the RDBMS enabled programmers of ordinary skill to build the early collaborative Internet applications in a reliable enough fashion to be useful. The RDBMS may have become a victim of its own success, as those applications were so useful to the millions of early Web users that they encouraged billions to sign up for Internet access (users by country). Twitter, a primarily text-based service, was collecting 7 TB of new data every day in early 2010 (slide show). An accepted definition of a "very large database" in 2000 was "more than 1 TB" and a database size between 5 and 50 terabytes (one week of Twitter data!) was something to write about in an academic journal (example). When the relational model is a natural fit to your data, the simplest and usually least expensive way to run any Internet application is with a single physical computer running an RDBMS from a set of local hard drives. The system administration and hosting costs of running one computer are lower than those of running more than one computer. The cost in time, hardware, and dollars of synchronizing data is never cheaper than when all of the data are in the main memory of one machine. The costs of software development and maintenance are also low, since SQL is so widely understood and SQL programs tend to be reliable. Below are some criteria for evaluating when it is time to considering abandoning the simple one-server RDBMS as the backend for an application. It is difficult to find realistic benchmarks for the kind of database activity imposed by an Internet application. The TPC-E benchmark is probably the closest among industry standards. TPC-E is a mixture of fairly complex reads (SELECTs) and writes (INSERTs and UPDATEs) into an indexed database that gets larger with the number of SQL statements that are attempted for processing. The "transactions per second" figures put out by the TPC include both reads and writes. In 2010, a moderately priced Dell 4-CPU (32 core) server hooked up to, literally, more than 1000 hard disk drives, processed about 2,000 transactions per second into an 8-terabyte database. The authors crowd-sourced the question in this blog posting and for smaller databases that can be stored mostly on solid-state drives, it seems as though a modest $10,000 or $20,000 computer should be capable of 10,000 or more SQL queries and updates per second. If you think that there is a realistic chance of exceeding one thousand SQL operations every second, you should put some effort into benchmarking your hardware to see if it falls over. Note that at three SQL statements per page served, 10 pages per user session, and 86,400 seconds in a day, 1000 SQL operations per second translates to a capacity of 288,000 users per day (assuming that the load is smooth, which is probably unreasonable, so divide by two to get 144,000 users per day). Note that some Web developers manage to load the database with more than 1,000 SQL requests every second even when there are only a handful of users. In tools such as Ruby on Rails, for example, it is possible for a programmer to generate code that, unbeknownst to him or her, will fetch 50,000 rows from the database using 50,000 SQL queries rather than a single query that returns 50,000 rows (and then of course the programmer will use Ruby to filter that down to the 30 rows that are displayed to the user!). See this tale of a sluggish Ruby on Rails server for more. Users who live far, in network terms, from the data center will have a slower experience of the service than users who live close to the data center. There are simply more network hops for the data to travel, and if those data are not cacheable, users will have to make the full round trip every time. In some cases the round trips can be sped up using route optimization such as Akamai's dynamic site accelerator. Unlike a traditional content delivery network (CDN) setup, the edge servers on these systems do not cache, they simply proxy uncached user data. Between the data center and user, the data takes a CDN-managed "express lane" to deliver higher speeds to the user. This arrangement allows you to improve global performance without having to distribute the data. An alternative is to move the data closer to the users. If the data can be partitioned by the user id, you can set up a second data center in Europe and place data for EU community users there, a third data center in China and so on. Amazon.com did this when they set up Amazon UK and Amazon Germany. They copied all of the software onto a new server and set up shop (literally) in those foreign countries. It may be cumbersome to do a query comparing sales of a product in Germany to sales of the same product in the U.S., but management of each database is easier and the consequences of a failure don't shut down the business everywhere in the world. A similar approach can be taken whenever there is limited value in comparing data from different users. Consider supporting smartphone users with an RDBMS storing contacts, emails, and calendar. Is there any value in comparing the content of Joe's email with Susan's phone numbers? If not, why lump them all together in one huge database? When that one huge database server fails, for example, every customer of the phone company will be calling in at once asking "What happened to my contacts?" [This is not a hypothetical, see "When the Cloud Fails: T-Mobile, Microsoft Lose Sidekick Customer Data".] Instead of one enormous cluster of exotic machines and hard drives, why not buy 100 1U ("pizza box") machines and assign customers to them according to the last two digits of their phone numbers? Given a standard hard drive layout, committed transactions can always be recovered in the event of hardware or software failure. That doesn't mean, however, that data will always be available. The simplest approach to redundancy is a "hot standby" server that has access the transaction logs of the production machine. If the production server dies for any reason, the hot standby machine can roll forward from the transaction logs and, as soon as it is up to date with the last committed transaction, take the place of the dead server. As a bonus, the hot standby machine can be used for complex queries that don't need to include up-to-the-second changes. For Oracle 11g, look for "Oracle Data Guard" to learn more about this approach. To facilitate disaster recovery, e.g., after a fire that destroys a server room, changes to the database must be sent to a server in another building or in another part of the world. If transaction logs are sent every night, the worst possible consequence of the disaster will be the loss of a day's transactions. If that isn't acceptable, it will be necessary to use various replication and distribution strategies to ensure that transactions are transmitted to the disaster recovery server as part of the commit process (look up "two phase commit" in the Oracle documentation, for example). Here are some examples of products that go beyond the "one computer" architecture but are not part of the "NoSQL", "NoACID" fad: At its simplest, a key-value database is a persistent associative array. The most familiar example is BerkeleyDB, a key/value store derived from the 1979 dbm. Since a key-value database can be straightforwardly implemented in any RDBMS as a single table with a VARCHAR and BLOB, modern key/value databases tend to arise in the context of solving one or more of the RDBMS' deficiencies. Examples: If you'd like to say that you're running both an RDBMS and a NoSQL DBMS, this posting on HandlerSocket explains how to bypass the SQL parser and turn MySQL into a NoSQL database (thanks to Michael Edwards for pointing this out). A familiar problem for users of object-oriented languages is how to map the runtime class hierarchy to a relational database when object persistence is needed. To meet the challenge, a variety of object-relational mapping systems such as Java's Hibernate and Ruby On Rails' ActiveRecord have evolved. Object databases, on the other hand, represent objects directly without any translation overhead. Given the popularity of object-oriented languages, such as Java, it is a mystery as to why object databases aren't more popular. Indeed, folks in the 1980s were already talking about the imminent death of the RDBMS, to be supplanted by the mighty new ODBMS. One theory: database users turned out to care more about attributes than identity. Object DBMSes are very fast if you already know what you're looking for, e.g., the thing that this other thing points to. On the other hand, relational databases are better suited when you need to query for objects matching a certain criteria, e.g., the things that are blue, happened on a Tuesday, and were not given away free. MapReduce is a framework for splitting up a big computing task into a number of smaller tasks that can be run in parallel. Let's illustrate with an example. Suppose that you have a very large HTTP server log file to parse on a mostly-idle four-core machine. Perhaps you need to count up the number of bytes returned for all responses delivered with either a 200 or 206 status code. You write a software routine that identifies lines matching the criteria for the request portion and status code, extract the bytes transferred for those lines and add them to a running sum. Then you kick off the job and wait. As the parsing proceeds, you notice that only one of your four cores is busy. How could you use the full power of the machine to speed up the job? One option is to split the log file into four roughly equal-sized parts, making sure to split along line boundaries in order to avoid parse errors. Now you can run four copies of the parser in parallel, one on each of the smaller files. Each routine runs on an available core, taking a filename as input and returning a byte count as output. Assuming that the processes are CPU-bound, this should run about four times faster than the original program. A final piece of computation is still required: you must add up the outputs of the 4 jobs to get the final byte count. This is the essence, then, of a system that implements MapReduce: It turns out that a lot of the basic work of splitting, mapping, dispatching and reducing can be formalized for reuse in building MapReduce systems. Apache Hadoop is one such framework. As such, MapReduce is not itself a database management system. Instead, database management system may employ MapReduce to run queries against large data populations using multiple cores and/or machines.
OPCFW_CODE
Configuring sendmail on Jaguar with mac.com (smtp.mac.com) For those who wish to pursue sendmail on Mac OS X 10.2 (Jaguar) (do Europeans refer to it as 10,2?), here is what I did: I finally got sendmail working but not with PHP which was my real goal. I’m not sure all of this was necessary but it is what I had at the end. I don’t have a domain registered at my home. I was simply trying to use PHP for outgoing mail. Ignore all of the angle brackets and simply put in the proper values. I ended up working the the following command in the terminal: sendmail -v -f<from email [email protected]> <to email [email protected]> < ~<homedir>/message This sent a message from <from email [email protected]> to <to email [email protected]>. In my /etc/mail/authinfo file: authinfo:smtp.mac.com “U:<userid>” “P=<password>” “R:smtp.mac.com” “M:PLAIN LOGIN” In my /etc/mail/access file: I didn’t end up using the alias file but I created one anyway that was empty. I added to the ./update file mentioned in the article to add the following to the end: if [ /etc/mail/authinfo -nt /etc/mail/authinfo.db ] echo Updating authinfo makemap hash /etc/mail/authinfo < /etc/mail/authinfo For those of you, like me, who are prone to making typo’s, watch for the word “then” in the if/fi structure. I left it out and it took me 15 minutes to figure it out. In the /etc/mail/config.mc file, there are 2 different single quote characters. The one that starts parameters is the wierd one to the left of the one (1) key on my keyboard. The normal single quote key is what ends parameters. Also, you can’t use the # symbol to mark a line as a comment. The M4 processor ignores it and processes the line anyway. In my final config.mc file, I ended up removing the `LUSER_RELAY’ line. My `confDOMAIN_NAME’ had the value of `smtp.mac.com’. No SMART_HOST was necessary for me (thank you WideOpenWest). I added one other line before the mailer(SMTP) line. FEATURE(`authinfo’)dnl With all of these changes made, I got sendmail working with smtp.mac.com but sadly PHP’s mail() function still didn’t work even after I create my /usr/local/lib/php.ini file as: sendmail_path = /usr/sbin/sendmail sendmail_from = <email [email protected]> SMTP = smtp.mac.com PHP would always die with a gethostbyaddr(192.168.0.2) failed: 3 messge. I never did figure out what that really means or how to fix it. Once again, visit Visit http://www.phpguru.org/smtp.html for an alternative to PHP’s mail() function that actually works with Jaguar and smtp.mac.com. EdwardD20 at mac dot com
OPCFW_CODE
Git deep-dive: "git init" In this article: This article kicks off a multi-part series taking a closer look at the common parts of git we often take for granted. And what better way to start the series than with the command that starts all git repositories: What you might know git init starts a new repository! You want source control in Git? Just run the command git init and BAM!– that directory is now a git repository. Here's what it looks like in action: Once this is done, you can do all the cool things you do in git, such as merge, and more!...But those commands are for another article. We're here to focus a little more on the usage of git init. Let's take a closer look... What you might not know 💡 How the repository is initialized git init actually doing when it initializes a repository? When you run the command, it creates a hidden directory called .git/ contains all the behind-the-scenes data that makes git source control tick. All the branch information, past changes, current status, and more is all held here. Here's what the contents of that directory look like: We aren't going to go through what each directory and file does here. Most of it isn't human-readable, and is only intended to be interacted with using Git. Just know that this .git/ sub-directory structure (or most of it) is required for git to do its thing. If you wanted, you could even manually create .git/ and all its contents, and git would still recognize it as a repository (Try it out if you have some free time! Which files are required? Which aren't?). This also means if you delete the .git directory on an existing repository, all your branches and git history will be deleted as well. This can actually be useful if you're very early in a project and decide you want to re-start a repository to initialize from your current working files. Just rm -rf .git and git init all over again. 💡 ALL repository data is stored in We often think about git storing historical data, while your working directory (everything outside .git/) has the current files. It's important to remember that Git, or more specifically: the .git directory, has all the data — including the most recent committed changes. In fact, you can even git clone directly from the .git directory, just like you would from a remote repository! 💡 Create a "remote" repository with Did you know you can actually push to and pull from your local working repository? However, you might run into some issues. Watch what happens when we try to push the master branch to a local working repository where master is also checked out: This is why you'll typically push and pull from a remote repository instead of from someone else's working repository. Most often, this remote repository setup is handled through a git repo service such as Bitbucket or GitHub, but if you wanted to set up your own, you'd use the Recognize those files and directories? That's everything we saw before in the hidden .git repository, except now they aren't covered by a hidden folder! Yes, they're...naked! 😮 So it shouldn't be surprising that this is known as a "bare repository". With a naked – um, I mean – bare repository, you're essentially telling git, "Nobody's working here. Just store the data, and let users push and pull from it." Since nobody's working here, you don't really need anything outside the .git directory. And if you don't need anything outside the .git directory, why not just put all the .git contents at the top level? And finally...Is your git init making TOO MUCH NOISE??? Stop all that terrible noisy racket with the I hope this has helped shine a light on how git init works and how a repository is born. For further reading, you may want to check out: Stay tuned for the next article in the series where I cover the slightly-more-interesting command: Continue the conversation Did this help you? Do you have other thoughts? Let's continue the discussion on Twitter! If you'd like to show your support, feel free to buy me a coffee. Thanks for reading!
OPCFW_CODE
Sanity slug/pathname field Background Documents that have an attached URL need a field to put this. Potential Strategies There's two schools of thought for Sanity slugs: Store the full URL Store just the page's part of the URL Storing the full URL allows for simple path resolution—send the whole URL to Sanity and find the matching document Storing just the page's part keeps it flexible and allows for flexibility changing of page prefixes, and prevents accidental editing of paths that should be the same, e.g. /articles/{slug}. Chosen Strategy For the initial implementation, I'd like to go with storing the full page URL. But to negate the drawbacks, I want to use a custom slug component that supports folders, folder locking, and initial values for folder. This allows us to mitigate some of the drawbacks of storing the full path. Needs The pathname field should have a few features: Store the full URL of the page Turn slashes '/' into folder parts, which become read-only to prevent accidental editing Edit button on the folder parts to edit them when needed Enable locking folder parts, so they cannot be edited and the edit button is locked Full frontend URL should display, to see what the URL looks like A clickable preview button to open this URL A clickable generate button to generate the slug from the title (or specified field) I actually really like this approach. Would also make displaying and searching/using the URL throughout Sanity a lot easier. Would this need a schema and publish action (to update all URLs) for the folder parts? This really should be avoided once a site goes live. But I can see a case where a last minute change to the slug structure could be requested. Yes you're right, I didn't mean to include the domain. Sounds good. We can call the field pathname, and if the user has any documents that aren't the full pathname, they can be referred to as slug. This is how I have the fields set up already. I contributed some of this functionality to Tinloof's studio package, so we have this functionality out of the box with them. The implementation is fairly simplistic, in that it just uses a custom component around the string field for the pathname. The custom component handles all the "logic" and it's then just stored as a string (well a slug) in the data. I did have to perform a last minute structure change on a recent website, and used a Sanity migration to do so. We could provide this migration as an example of how to change these The main area this breaks down is where we might have URLs generated by multiple pieces of data, that it's harder to be deterministic. I.E. a URL that has a particular category slug and an article slug in the URL: domain.com/articles/my-category/my-article I think it's an uncommon request, and one that can likely be handled by the user, rather than out of the box, but it might be good in v2 to think about how we can handle multiple types of routing like this, for non-canonical pages. The other place this might be important is with parent/child pages. Although that'll depend on if anything fancy happens with that. This guide may be a little overkill, but has some interesting approaches that I had tried out in testing. I've integrated the Pathname field from @tinloof/sanity-kit, but it has some issues that I'd like to look at addressing in the future: When you create a new document that has a locked prefix, the prefix is editable. It locks upon changing fieldset or clicking in and out of the field When no pathname has been entered into a field with a locked prefix, the required validation error doesn't trigger (as it technically has a value). This needs a neat way to be handled, as it should be the same as required in that no value is there Will open a new issue for these points
GITHUB_ARCHIVE
Problems, problems, problems. At a fast-paced tech company like L&W, there’s always a problem to solve, but how do you recognise what is, and what isn’t, a problem? How might you go about solving it, especially when in unfamiliar territory or faced with a mass of “data” and opinions in a forum with multiple people? This is where formal problem-solving methodology is always useful. There are various frameworks in the industry, so let’s keep this high level as they’ll all have a lot in common; often the methodology has been formulated and developed by interviewing people with a reputation for solving problems and labelling up good and common industry practice. So, what is a problem an “issue”/incident, not a problem in the ITIL ? Actually, this is quite an interesting question, because sometimes, you don’t really need to solve anything to correct a situation. If you consider a problem to be an issue you have to spend some time on to solve, then there are quite a few scenarios that don’t meet the formal criteria. A problem must have all 3 of these aspects be true: - A deviation of the (measurable) ACTUAL, from the (measurable) SHOULD (or, to put it another way, it should be doing this, but it’s actually doing that); - You don’t know the root cause; and - You need to know the root cause. Number 3 is quite noteworthy, especially in the ITIL world. If the incident can be resolved by a service restart or a ctrl-alt-del like action, then great, job done, service restored – no problem J. We can investigate the underlying root cause later, but for now, the SHOULD and the ACTUAL are aligned (it should be doing this, and now it is!). In a mass of confusion, where do you start? A good first step is with the question “what’s the problem we’re trying to solve?” Seems obvious right, but you would be amazed at how many situations exist where no one can articulate the answer to that question. If faced with this scenario, the ideal place to begin is to list concerns and from those concerns, separate and clarify, and derive the problem statement(s) in an object-defect format (the thing having the issue, and what that issue is). From there, consider these questions; document the answers: - When did the problem start? - When’s the problem going to start (if alarm and graph trend)? - Where’s the problem; where isn’t the problem (differences can often suggest clues to potential solutions)? - This is a lifecycle question, not necessarily a geographical one. - What’s changed (always ask this one!)? - Where else might we have this issue (now)? - When might we have this issue (in the future)? - What other problems might this cause? (definition of “done”; think beyond the fix – time, environment)? I find the structure really accelerates the whole process, keeps the investigations on track and supports a controlled documented briefing to new / additional members of the Investigation Team. There are multiple online references to problem solving methodologies; I thoroughly recommend Googling “formal problem-solving methodology” to get started… The opinions expressed in this blog post are strictly those of the author in his personal capacity. They do not purport to reflect the opinions or views of Light & Wonder or of its employees. May 17th 2022
OPCFW_CODE
M: Launch HN: Meeting network Undock out of stealth - dukeofdalt https://undock.com/ R: dukeofdalt Hi Hacker News! I'm David Co-Founder of Undock. Scheduling is a pain and meetings are a drag. We stripped down every component of a meeting and reimagined what a seamless end-to-end experience would feel like. Here's the first piece what we've built: -Predictive Scheduling. Intelligent meeting time suggestions, wherever you work - starting in email. -Privacy. Undock will only show the top few times on any given day that works for everyone - not your entire availability.Create an account and set to private and never answer "are you free?" again. -Mutual Availability. We are able to perfectly match mutually preferred times filtered through everyone's scheduling behavior, preferences, and availability. -Workflow. Schedule right from your inbox, no calendar checking required. We built in interactive conferencing with agenda+notes on screen. -Real-time availability and status in the next release. We're building some pretty interesting AI models around the project and I hope to be able to share soon. The whole Undock team is thrilled to be announcing our release on Product Hunt today. We've lifted the waitlist for 24 hours. Join Undock and let me know what you think.
HACKER_NEWS
// Read fonts 'use strict'; const opentype = require('opentype.js'); const ft_render = require('./freetype'); const AppError = require('./app_error'); const Ranger = require('./ranger'); module.exports = async function collect_font_data(args) { await ft_render.init(); // Duplicate font options as k/v for quick access let fonts_options = {}; args.font.forEach(f => { fonts_options[f.source_path] = f; }); // read fonts let fonts_opentype = {}; let fonts_freetype = {}; for (let { source_path, source_bin } of args.font) { // don't load font again if it's specified multiple times in args if (fonts_opentype[source_path]) continue; try { let b = source_bin; if (Buffer.isBuffer(b)) { // node.js Buffer -> ArrayBuffer b = b.buffer.slice(b.byteOffset, b.byteOffset + b.byteLength); } fonts_opentype[source_path] = opentype.parse(b); } catch (err) { throw new AppError(`Cannot load font "${source_path}": ${err.message}`); } fonts_freetype[source_path] = ft_render.fontface_create(source_bin, args.size); } // merge all ranges let ranger = new Ranger(); for (let { source_path, ranges } of args.font) { let font = fonts_freetype[source_path]; for (let item of ranges) { /* eslint-disable max-depth */ if (item.range) { for (let i = 0; i < item.range.length; i += 3) { let range = item.range.slice(i, i + 3); let chars = ranger.add_range(source_path, ...range); let is_empty = true; for (let code of chars) { if (ft_render.glyph_exists(font, code)) { is_empty = false; break; } } if (is_empty) { let a = '0x' + range[0].toString(16); let b = '0x' + range[1].toString(16); throw new AppError(`Font "${source_path}" doesn't have any characters included in range ${a}-${b}`); } } } if (item.symbols) { let chars = ranger.add_symbols(source_path, item.symbols); let is_empty = true; for (let code of chars) { if (ft_render.glyph_exists(font, code)) { is_empty = false; break; } } if (is_empty) { throw new AppError(`Font "${source_path}" doesn't have any characters included in "${item.symbols}"`); } } } } let mapping = ranger.get(); let glyphs = []; let all_dst_charcodes = Object.keys(mapping).sort((a, b) => a - b).map(Number); for (let dst_code of all_dst_charcodes) { let src_code = mapping[dst_code].code; let src_font = mapping[dst_code].font; if (!ft_render.glyph_exists(fonts_freetype[src_font], src_code)) continue; let ft_result = ft_render.glyph_render( fonts_freetype[src_font], src_code, { autohint_off: fonts_options[src_font].autohint_off, autohint_strong: fonts_options[src_font].autohint_strong, lcd: args.lcd, lcd_v: args.lcd_v, mono: !args.lcd && !args.lcd_v && args.bpp === 1 } ); glyphs.push({ code: dst_code, advanceWidth: ft_result.advance_x, bbox: { x: ft_result.x, y: ft_result.y - ft_result.height, width: ft_result.width, height: ft_result.height }, kerning: {}, freetype: ft_result.freetype, pixels: ft_result.pixels }); } if (!args.no_kerning) { let existing_dst_charcodes = glyphs.map(g => g.code); for (let { code, kerning } of glyphs) { let src_code = mapping[code].code; let src_font = mapping[code].font; let font = fonts_opentype[src_font]; let glyph = font.charToGlyph(String.fromCodePoint(src_code)); for (let dst_code2 of existing_dst_charcodes) { // can't merge kerning values from 2 different fonts if (mapping[dst_code2].font !== src_font) continue; let src_code2 = mapping[dst_code2].code; let glyph2 = font.charToGlyph(String.fromCodePoint(src_code2)); let krn_value = font.getKerningValue(glyph, glyph2); if (krn_value) kerning[dst_code2] = krn_value * args.size / font.unitsPerEm; //let krn_value = ft_render.get_kerning(font, src_code, src_code2).x; //if (krn_value) kerning[dst_code2] = krn_value; } } } let first_font = fonts_freetype[args.font[0].source_path]; let first_font_scale = args.size / first_font.units_per_em; let os2_metrics = ft_render.fontface_os2_table(first_font); let post_table = fonts_opentype[args.font[0].source_path].tables.post; for (let font of Object.values(fonts_freetype)) ft_render.fontface_destroy(font); ft_render.destroy(); return { ascent: Math.max(...glyphs.map(g => g.bbox.y + g.bbox.height)), descent: Math.min(...glyphs.map(g => g.bbox.y)), typoAscent: Math.round(os2_metrics.typoAscent * first_font_scale), typoDescent: Math.round(os2_metrics.typoDescent * first_font_scale), typoLineGap: Math.round(os2_metrics.typoLineGap * first_font_scale), size: args.size, glyphs, underlinePosition: Math.round(post_table.underlinePosition * first_font_scale), underlineThickness: Math.round(post_table.underlineThickness * first_font_scale) }; };
STACK_EDU
dep ensure: unhelpful error when source files have errors What version of Go (go version) and dep (git describe --tags) are you using? go version go1.8.3 darwin/amd64 v0.1.0-215-g911cd22 What dep command did you run? carbon:cmpc synpacket$ dep ensure all dirs had go code with errors carbon:cmpc synpacket$ dep ensure -v all dirs had go code with errors What did you expect to see? More information about what the errors were. What did you see instead? "all dirs had go code with errors" possibly a bit unhelpful. At the very least, I'd expect to see a list of what files had problems. @matjam thanks for reporting. This error seems to be coming from here. I think we really should provide better error messages here. Maybe something like a list of packages with errors and the type of error. PR out for this, @matjam or @darkowlzz if you want to take a look at what I'm proposing. I think that this is at least a step in the right direction for UX around this without requiring delving into changing how pkgtree.ListPackages works. Took a look, I feel like it's a good fix. I added detail to the checkErrors error messages, but I couldn't reproduce the original issue. https://github.com/ramjac/dep/tree/checkErrors-logging @sdboyer Should probably re-open since #825 was reverted. As a note (from the PR): If any directory with go has errors, then we return an error. If no directories have go, return an error. Test cases: Only NoGoError (expect error) Mixture of Package + NoGoError (expect no error) Only non-NoGoError errors (expect error) OK. I have it working on my machine, just need to find the time to make a new patch and create a new commit. I've tested it against this repo: https://github.com/grepory/deptest1 Which contains an empty directory and otherwise go with no errors. Will try to add an integration test for this as well. OK. I have it working on my machine, great! 🎉 I've tested it against this repo: https://github.com/grepory/deptest1 As a general rule, we try to avoid introducing dependencies on new external repositories - if we do, we prefer that they're under the control of a maintainer, just so that we can avoid an explosion. Would it be possible to write the integration test using that same dir structure, but committed directly under some testdata dir? Absolutely. I'll figure out the integration testing stuff! On Jul 17, 2017, at 7:26 PM, sam boyer<EMAIL_ADDRESS>wrote: OK. I have it working on my machine, great! 🎉 I've tested it against this repo: https://github.com/grepory/deptest1 As a general rule, we try to avoid introducing dependencies on new external repositories - if we do, we prefer that they're under the control of a maintainer, just so that we can avoid an explosion. Would it be possible to write the integration test using that same dir structure, but committed directly under some testdata dir? — You are receiving this because you commented. Reply to this email directly, view it on GitHub, or mute the thread. Once more, with feeling. Should this be closed? From @sdboyer in the latest pull request related to this issue, #960: since we've merged #844, i'm gonna close this, as i think we have the basic case covered. if you feel strongly about pursuing this refactor, though, then feel free to reopen and address the comments. thanks! :tada: There hasn't been any additional activity from the author since August and the original issue appears resolved. Yes, this was fixed in #844 . Closing it. Thanks!
GITHUB_ARCHIVE
I gave a talk last week at Spec.la about playing with thermal receipt printers, which you can watch above. Here is some more information about the project: This is the original video of the token of affection printer and a picture of the token itself (portrait by Flynn Nicholls). My specific thermal printer is an Epson TM-T88IV. The PHP library I’m using throughout is Michael Billington’s escpos-php. It has a list of all of the printers it supports with the ESC/POS command set. From Mike’s quick start guide, you can see it’s as easy to print as sending: echo "Hello World" | nc 192.168.192.168 9100 The video of a printer printing code page 437 ASCII art is by burps. You can see the full DOS Code page 437 on Wikipedia. In the Epson world, this is Code Page 0 and is probably supported on their entire product line. The Cobra ASCII art I printed is by fUEL’s the knight for Break#11. The jitter is from me using the printer’s onboard color inversion and the printer not quite keeping up (ASCII is usually white characters on a black screen, not black characters on white paper). I chose Font B and Double Height text to remove any whitespace between the lines. I showed the @PETSCIIBOTS Twitter account. I also mentioned The Little Printer and Chumby. I’m pretty sure the rstevens project for The Little Printer I was actually thinking of is Pixel Presidents which were delivered every Friday. I also say “The IOT” at this point in the video much to my amusement. Although not mentioned in the video, Adafruit has done a lot of work with thermal receipt printers. I used Bill Parrot’s PHP-Maze-Generator which uses the Union-Find algorithm to create the maze. There’s a brief discussion in the video with jbum of KrazyDad where he suggests Kruskal’s algorithm might be able to generate it line by line or at least more efficiently. Other notes: My printer does two colors according to the specs but I haven’t found the proper paper for this. I assume it’s just printing at two different temperatures. Also, it seems almost impossible to buy fewer than 10 rolls of paper at a time. The final thought of the video is about the RJ12 port on the back of the receipt printer. The port is designed to trigger the opening of a cash drawer when the ESC p command is sent to the printer. This wiring appears to be fairly standard and you can find compatible cash drawers all over the place. Finding receipt printers on the network and opening their cash drawers is left as an exercise for the viewer. I also wonder if you could induce a charge in the unshielded cable to get the drawer to pop. What could possibly go wrong?
OPCFW_CODE
After having run into several somewhat small but bothersome problems with both my machines running Fedora 10 (various sound problems with the new pulse, a new slew of compiz problems, and bad Xorg memory leaks), I decided that since OpenSuSE offers many things I was looking for right now: RPM 4.6, OpenOffice.org 3, a decent pulseaudio that hasn’t implemented timer based scheduling, etc, that I would give it a try. I am a fan of the gnome desktop. I respect KDE, but don’t use it, and to be fair didn’t try the KDE version of OpenSuSE. Since OpenSuSE ships gnome 2.24, I had assumed my UI experience would be somewhat similar. To this, I was very wrong. One panel isn’t a problem for me, but the Vista-esque start menu is. The first problem is, although it’s a nice shiny GUI, I simply can’t find anything without a fight. If it’s not on the pane that pops up for the menu you go to an Application launcher that took a little while to load and then it’s like a thumbnail interface like CCSM. I know menus are archaic and ugly, but at least I can find what I need. The other desktop problem is that the online help and an extraneous welcome to OpenSuSE icon are locked to your desktop. The user cannot remove them. The links are stored away in /usr/lib. Not really a big deal, but not very conventional either. The installation itself went very smoothly. The user is given a very welcoming, visually appealing installer. The steps are fairly smooth and straightforward. My only complaint is that, compared to anaconda, it just seems that there is too much going on. But outside of the busy UI, it was pretty straightforward, and anyone who has done a linux install before should be able to easily glide through it. My biggest issue with the distribution is YaST. If you want to update your software or add new software, you use YaST. It is a big clunky tool, that isn’t very straightforward on the UI. It’s performance has improved from previous versions, but it’s no where near as user friendly as PackageKit. By the time I was able to find the basic software I wanted for my system I had about 6 repos installed – none of which allowed me to install yum. There are ways to install yum or smart and use them, and I would encourage one to use one of those methods. On the positive notes, SaX2 did an excellent job of setting up my display. the artwork was very well done. Much of the things under the hood were very similar to Fedora. My experience reached an epic fail when I attempted to use my /home left over from my Fedora 10 install, which at that point I could no longer log into gnome. Alas, openSuSE, you are not for me, but I still respect you and your community. There is a good chance that if I had tried the KDE version, my UI experiences may have been very different. But I’ll leave that to the KDE-fans.Share on Twitter Share on Facebook
OPCFW_CODE
How to Become a Software Developer? Dear readers, many of you must have the ambition to become programmers, make a living with software development, or work in the IT sector. That is why we have prepared for you a short guide "How to become a programmer" to navigate you on the steps to this much-desired profession. Becoming a programmer (at the level of starting work in a software company) would take at least 1-2 years of learning and writing code every day, solving several thousand programming tasks, developing several practical projects, and gaining a lot of experience with code writing and software development. It's not possible for a month or two! The profession of software engineering requires a large amount of knowledge, backed by extensive practice. Video: Become a Software Engineer – 4 Essential Skills Watch a video lesson about SoftUni and SoftUni Judge here: https://youtu.be/Ds5PD3UW57k. The 4 Essential Skills of the Software Developers There are 4 main skill groups that all programmers must have. Most of these skills are preserved over time and are not significantly affected by the development of specific technologies (which change constantly). These are the skills that every good programmer has and which every rookie should aspire to obtain: - code writing (20%) - algorithmic thinking (30%) - fundamental knowledge of the profession (25%) - languages and development technologies (25%) Skill # 1 - Coding (20%) Learning how to write code forms about 20% of the minimum skills required for a programmer to start work in a software company. The ability to code includes the following components: - work with variables, conditionals, loops - functions, methods, classes, and objects - work with data: arrays, hash tables, strings The ability to code can be learned in a few months of intensive studying and solving practical tasks by writing code every day. This book covers only the first part of the coding skill: working with variables, conditionals, and loops. The rest remains to be learned in subsequent trainings, courses, and books. The book give only the beginning of a long-term and serious study, on the path to professional programming. You won't be able to become a programmer without mastering the material from this book. You will lack programming fundamentals and it will become increasingly difficult to move forward. Therefore pay enough attention to the basics of programming: solve problems and write a lot of code for months until you learn to easily solve all the problems in this book. Then move on. We pay special attention to the fact that programming language doesn't have significant relevance for one's coding skill. You can either code or not. If you can code in C#, then you'll easily switch to Java or C++, or any other language. These are the skills that each programming book for beginners starts with, including this one. Skill # 2 - Algorithmic Thinking (30%) Algorithmic (logical, engineering, mathematical, abstract) thinking forms about 30% of the minimum skills for a start in the profession. Algorithmic thinking is the ability to break a task into a logical sequence of steps (algorithm) and to find a solution for each step, then to put them together in a working solution for the initial task. This is the most important skill that the programmer has. How to build algorithmic thinking? - Review many (1000+) programming tasks, the more diverse, the better. This is the recipe: reducing thousands of practical tasks, inventing an algorithm for them, and executing the algorithm, along with debugging errors along the way. - Physics, mathematics, and/or similar sciences help, but they are not restraining! People with engineering and technical inclinations usually learn to think logically easily, because they already have the skills for solving problems, although not algorithmic. - The ability to solve programming tasks (which requires algorithmic thinking) is extremely important for programmers. Many companies require only this skill in job interviews. This book develops a beginner's level of algorithmic thinking, but it is not enough to make you a good programmer. To become proficient in the professions, you will need to add logical thinking skills and improve tasks outside this book, such as working with data structures (arrays, lists, matrices, hash tables, tree structures) and basic algorithms (search, sorting, tree structures, recursion, etc.). As you may guess, the programing language does not matter for the development of algorithmic thinking. To think logically is universal, even if it's not related only to programming. Because of their well-developed logical thinking, there is the misconception that all programmers are smart people and having a high IQ is a requirement for entering into the profession. Skill #3 – Fundamental Knowledge of The Profession (25%) Fundamental knowledge and skills for programming, software development, software engineering and computer science form about 25% of the developer's minimum start-up skills. Here are the most important parts of these skills and knowledge: - basic mathematical concepts related to programming: coordinate systems, vectors and matrices, discrete and indiscreet mathematical functions, end machines and state machines, concepts of combination and statistics, algorithm complexity, mathematical modeling, and others. - skills to program - code writing, data work, use of conditional structures and loops, work with arrays, lists and associative arrays, strings, and word processing, working with streams and files, using program interfaces (APIs), working with a debugger, and others. - data structures and algorithms - lists, trees, hash tables, columns, search, sorting, recursive, tree crawling, etc. - object-oriented programming (OOP) – working with classes, objects, inheritance, polymorphism, abstraction, interfaces, data encapsulation, exception management, design templates. - functional programming (FP) - working with lambda functions, high order functions, functions that return a function as a result, closing a state in a function (closure), and more. - databases - relational and non-relational databases, database modeling (tables and links between them), SQL query language, object-relational data access (ORM) technologies, transactionality, and transaction management. - network programming - network protocols, network communication, TCP/IP, concepts, tools, and technologies from computer networks. - client-server interaction, communication between systems, back-end technologies, front-end technologies, MVC architectures. - back-end development technologies - web server architecture, HTTP protocol, MVC architecture, REST architecture, web development frameworks, templating engines. - web front-end technologies (client development) - HTML, CSS, JS, HTTP, DOM, AJAX, back-end communication, REST API call, front-end frameworks, basic design and UX (user experience) concepts. - mobile technologies - mobile applications, Android and iOS development, mobile user interface (UI), server logic call. - built-in systems - microcontrollers, digital and analog input and output control, sensor access, peripheral control. - operating systems - work with operating systems (Linux, Windows, etc.), installation, configuration and basic system administration, process handling, memory, file system, users, multitasking, virtualization, and containers. - parallel programming and asynchronousness - thread management, asynchronous tasks, promises, common resources, and access synchronization. - software engineering - source control systems, development management, task planning and management, software development methodologies, software requirements and prototypes, software design, software architectures, software documentation. - software testing - unit testing, test-driven development, QA engineering, error reporting and error trackers, test automation, building processes, and continuous integration. We must emphasize once again that the choice of the programming language is not a significant factor in acquiring these skills.. They accumulate slowly, over many years of practice in the profession. Some knowledge is fundamental and can be learned theoretically, but it takes years of practice to fully understand them in depth. Fundamental knowledge and skills for programming, software development, software engineering, and computer science are taught during the Software Engineering Program. Working with a variety of software libraries, APIs, frameworks, and software technologies and their interaction gradually builds this knowledge and skills, so do not expect that you will understand them from a single course, book or project. Basic knowledge in the areas listed above is usually sufficient to start working as a programmer. You will gain a deeper understanding of the concepts from the technologies and development tools you use in your day-to-day work. Skill #4 - Programming Languages and Software Technologies (25%) Programming languages and software development technologies form about 25% of the developer's minimum skills. They are the most voluminous to learn, but they change most quickly over time. If we look at job advertisements from the software industry, there are often all sorts of words mentioned (such as those listed below), but in fact, the ads silently imply the first three skills: to code, to think algorithmically, and to be proficient at the foundation of computer science and software engineering. The choice of a programming language is essential for acquiring technological skills. - Note: only for this 25% of the profession does programming language matter! - For the remaining 75% of skills, language does not matter and these skills are time-resilient and transferable between different languages and technologies. Here are some commonly used software development stacks sought by software companies (up-to-date as of January 2018): - C# + CMO + P + classes from .NET + DATABASE SQL Server + Entity Framework (EF) + ASP.NET MVC + HTTP + HTML + CSS + JS + DOM + jQuéry - Java + Java API classes + CMO + AP + databases + MySQL + HTTP + web programming + HTML + CSS + JS + DOM + jQuery + JSP/Servlets + Spring MVC or Java EE / JSF - PHP + CMO + databases + MySQL + HTTP + web programming + HTML + CSS + JS + DOM + jQuery + Laravel / Symfony / other MVC framework for PHP - Python + CMO + PH + databases + MongoDB or MySQL + HTTP + web programming + HTML + CSS + JS + DOM + jQuery + Django - C++ + CMO + STL + Boost + native development + databases + HTTP + other languages - Swift + MacOS + iOS + Cocoa + Cocoa Touch + XCode + HTTP + REST + other languages If the words listed above seem scary and incomprehensible to you, then you are quite at the beginning of your career and have years to learn until you reach the profession of a software engineer. Don't worry, every programmer goes through one or more technological stacks and has to study a set of interconnected technologies, but the most important skill is writing programming logic (to code) and thinking algorithmically (to solve programming problems). Becoming a good software engineer is impossible without those skills. Programming Language Doesn't Matter! As we already made clear, the technical skills, determined by mastering a specific programming language and technology, amount to about 10-20% of a software developer’s overall skillset. - All programmers have about 80-90% of the same skills, which do not depend on language! These are the skills to program and develop software, and they are very similar in different programming languages and development technologies. - The more languages and technologies you are proficient in, the faster you will learn new languages and the less you will feel a difference between them. Let us state once again that the choice of programming language (mostly) does not matter - you just need to learn to program. This process starts with coding (by reading this book or enrolling in a Software Engineer program), continues with mastering more complex programming concepts (like data structures, algorithms, OOP, and FP), and includes using fundamental knowledge and skills for software development, software engineering, and computer science. You will need to know a specific programming language, program libraries (APIs), frameworks, and software technologies (front-end UI technologies, back-end technologies, ORM technologies, etc.) once you start working on a software project.
OPCFW_CODE
[Date Prev][Date Next][Thread Prev][Thread Next] - Subject: Re: Upstream is not the last word (was Re: [ANN] Lua 5.1.5 (rc1) now available) - From: Ross Bencina <rossb-lists@...> - Date: Tue, 14 Feb 2012 21:01:33 +1100 On 14/02/2012 7:39 PM, Sean Conner wrote: It was thus said that the Great Ross Bencina once stated: Should I laugh or cry? > I'm going to attempt an answer, from my perspective: > Use case #1: application development language [ snip ] > Use case #2: user extension language [ snip ] > For use case #1 above (developing) it probably doesn't matter, people > can do what they like, but for use case #2 (user extension) I really > don't want to wear the "object system and language library designer" hat > and be held accountable by my users of how such basic facilities work -- > I'm more than happy to do it the "standard base way". > Does that make sense? Not really. I don't see any real difference between #1 and #2, so I'm not even sure what you are asking. I would put your use in case #1: you control the development and runtime environment, the whole code base, *and* the developers, *and* the users. You have captive users too -- you can treat them any way you like. Case #2 is like, well I can't think of a Lua example, but let's say, it's like "Visual Basic for Applications". You give the user an App, and it has a scripting environment where they can write extensions. The user will write code. In this case you're providing a programming environment to the user -- either it's plain Lua or it comes with some "batteries." Even though I don't expect "batteries" with Lua, end-users sure will -- and it really shouldn't be a nightmare on the scale of this thread for an embedder to provide "the standard batteries". All I'm saying is that for cases where batteries are included, they should be the same batteries (or at least compatible batteries), and it should be easy to find and package them. Otherwise App A embedding "Lua" could use it in a completely different and incompatible way to App B. That's fine if there's a good reason for it, but it's a big headache if it's just becausefunction names or core idioms aren't standardised. Or the embedder (like me) wasn't a Lua guru, and didn't really know what the right batteries looked like. > Also, I'm not a big proponent of object > oriented programming, and I find what Lua provides just fine > for my own needs (I stay as far away from C++ as possible), > so I don't quite understand this whole obsession over objects. > From what I can see, Lua support of "objects" is fine. Fair enough. The same would apply if we were discussing functional programming. There are a standard set of functions in that style too (e.g. map() ) > So for my use case, it's important to be able to statically > compile modules into the program. It's less important to > have a "base library" or "base object system" because what > Lua provides is enough (or rather, I have to supply quite a bit, > but it's all custom coding anyway). Do you think every line of what you supply needs to be custom? Are you really using absolutely no standard utility functions? no reusable abstractions? generic functions? If not, then colour me surprised. I had to write a bunch of abstractions just to get started. Sure, they were light weight and simple, but they were still necessary to make the code My assertion is that if there is no "base library" then every Lua embedder has to concoct their own base library. Which is fine for custom jobs, but not so easy to manage when the whole thing is then published as an open programming system for other end-user-developers to use. Every such publisher becomes their own Cardinal (to use the previous blessing analogy) -- I have enough jobs already, I don't want to be a Lua language Cardinal, I just want to publish something with a "standard base" that's usable.. whether the standard base is object-oriented or functional is less important that it is usable by end users without me or end users writing a bunch of expected "standard" infrastructure.
OPCFW_CODE
How to save List<Point> into preferences and get List<Point> from preferences in flutter? Error while using json_serializable json_serializable:json_serializable on .../sign_point_model.dart: Error running JsonSerializableGenerator Could not generate fromJson code for valList because of type Point<num>. None of the provided TypeHelper instances support the defined type. json_serializable doesn't know how to convert a Point into JSON. Since you know it's just a pair of nums you could easily convert the list yourself. import 'dart:convert'; void main() async { var points = [ Point(Offset(123, 456), PointType.tap), Point(Offset(3.14159, 3.16227), PointType.move), ]; var simplified = points.map((e) => [e.offset.dx, e.offset.dy, e.type.index]).toList(); String j = json.encode(simplified); print(j); var decoded = json.decode(j) as List; var asPoints = decoded .map((e) => Point(Offset(e[0], e[1]), PointType.values[e[2]])) .toList(); print(asPoints); } Unhandled Exception: Converting object to an encodable object failed: Instance of 'Offset' List pointList = _signatureCanvas.exportPoints(); var simplified = pointList.map((e) => [e.offset, e.type]).toList(); String jsonString = jsonEncode(simplified); print(jsonString); e.offset is apparently an Offset, not a basic type. What is your Point? You'll need to encode its values too. enum PointType { tap, move } class Point { Offset offset; PointType type; Point(this.offset, this.type); } // Offset(dx, dy) from geometry.dart class Ah, that's a different Point. Try var simplified = points.map((e) => [e.offset.dx, e.offset.dy, e.type.index]).toList(); and var asPoints = decoded.map((e) => Point(Offset(e[0], e[1]), PointType.values[e[3]])).toList(); Please elaborate about this line. var asPoints = decoded.map((e) => Point(Offset(e[0], e[1]), PointType.values[e[3]])).toList(); How to use this?? var simplified = _points .map((e) => [e.offset.dx, e.offset.dy, e.type.index]) .toList(); var asPoints = jsonDecode("source???") .map((e) => Point(Offset(e[0], e[1]), PointType.values[e[3]])) .toList(); Replace the two similar lines in the answer, with those two new lines. Updated the answer to show complete example of encoding and decoding. var simplified return list. So, when decoding what "stringSource??" should I use? I didn't get it. Please elaborate. The string you got by encoding the list, which you earlier saved to shared preferences, and have now retrieved from there. Error when decoding: var asPoints = jsonDecode(j) .map((e) => Point(Offset(e[0], e[1]), PointType.values[e[3]])) .toList(); print(asPoints); DECODING: var asPoints = jsonDecode(j) .map((e) => Point(Offset(e[0], e[1]), PointType.values[e[2]])) .toList();
STACK_EXCHANGE
Came across the beautiful, inspiring poem ‘Invictus’ by William Henley today while reading ‘The Warrior Elite : The Forging of Seal Class 228’ a book about Navy SEAL training - I highly recommend it: “Out of the night that covers me, Black as the pit from pole to pole, I thank whatever gods may be For my unconquerable soul. In the fell clutch of circumstance I have not winced nor cried aloud. Under the bludgeonings of chance My head is bloody, but unbowed. Beyond this place of wrath and tears Looms but the Horror of the shade, And yet the menace of the years Finds and shall find me unafraid. It matters not how strait the gate, How charged with punishments the scroll, I am the master of my fate: I am the captain of my soul.” I have really enjoyed working at Microsoft in the Windows Mobile Multimedia Group as a Program Manager for the past 15 months. And I’d do it all over again in a heart beat. Windows Mobile was my favorite product group at Microsoft and as a former intern, I was able to specify this is where I wanted to end up. However, you don’t get to decide which team you end up on. Landing on the Multimedia Team was an awesome experience. What’s cooler than working on the multimedia experience on mobile phones? I got to work on interesting features that I can’t really talk about in this blog post. Suffice it to say, it was a good experience. I left Microsoft on January 4th, to work on emptyspaceads, a company I founded and have been working full time on for the last month… and one that I feel has huge potential. More on that later… but if you’re impatient check out the emptyspaceads blog. One last thing: if you’re an entrepreneur and you’re broke (like I was in October of 2006 when I started), Microsoft is a pretty great place to be. You can learn new things, build relationships with smart people, and most importantly have time to work on your venture on your own time. I feel start-ups are great, but it’s hard to pursue your own projects while working at one. Microsoft has reached the point in its existence that its employees (in most product groups) don’t work 100 hours/week, freeing you up to pursue your projects. Recently, I’ve been facing what I thought to be a big problem. I wasn’t getting as much out of my job as I could be… and it was really bumming me out. I wrote the problem off as something I had to deal with. It wasn’t until I went on a long bike ride to Seward Park and back that I figured it out. I remembered a Stanford Entrepreneurial Podcast from a professor who taught entrepreneurial classes there. So I’ve turned my problem into a learning opportunity. Here is my execution plan: - Pick key components of the software business that I want to learn about - Prioritize these areas - Spend one month on each of these areas (priority order) - During the month, learn everything I can. Seek out, meet, and learn from gurus in the area. Read articles and books if needed (do I even have time? :p). - Make the most of this problem… I mean opportunity I want to record the lessons I’ve learned so that I don’t forget them. I also want to make sure others can benefit from my mistakes. Be ready to work tirelessly to assure the site doesn’t cave under pressure - It takes more effort than you think. Will worked very hard on getting queuing working. Even so, we had thousands of dropped calls. You get ONE chance to wow users with your product. If it doesn’t work the first time, they’re not coming back - ever. Monitor incoming links - Use software like Google Analytics to monitor incoming links. Know where your traffic is coming from by the hour. Do damage control early & often - Be prepared to respond on all other blogs and places that mention your product. Thank the reviewer for their time and the readers for reading the story about you. Y ou should be thrilled! Then roll up your sleeves and get ready to handle a bunch of negative comments. Address each one if possible and invite others to come back to your personal/company blog to continue the dialog Create many ways of providing feedback - Having a blog to link to, a contact form, and an email address is only a start. Consider having a phone number and a message board. Your web business lives and dies on customer feedback, so keep them happy. Create ways for your users to come back - We had no bookmarking feature for our site relying on users to manually bookmark our site. Big mistake. We lost out on thousands of visitors because we didn’t offer these features earlier. Ideally have a strategy of monetizing the influx of traffic - the first wave of traffic pretty much caught us unexpected. Before I go telling anyone else about this service I want to have a way we can monetize all the calls we’re making. Wowsers! Thanks! A life goal achieved. Thanks for all of your support! Please let me know how I can improve this tool! **UPDATE: **Uncov Review (the Anti-TechCrunch) Hi, I'm David, a 6'6" entrepreneurial eagle scout from Kansas. I write about my entrepreneurial journies. You should stick around. Here's a historical view of my trip below. You can also view a full screen version.
OPCFW_CODE
When you look at the backend landscape, currently there are many popular languages - I would say Golang and Python are the most popular, with NodeJS/TypeScript towards the top of the list especially when the role requires some full-stack work. I am not going to argue against Golang today - it is a great language with many years of hard work to build out its performance, syntax, tooling, community and many more assets. The part I want to look at today is whether there is much value in migrating from TypeScript to Golang for purely cost reasons. I want to answer what kind of savings would it offer to a company who uses many of AWS products?? We will use an example company so we have something to compare against. They have a website, native mobile apps and TV app, all powered by an API run entirely on AWS. The surve to tens of thousands of users which produces millions of requests. A summary of the tech stack includes - Fargate (via ECS and EKS) - for the API - EC2 - for the website - Serverless Lambdas - for processing user input (which is collected very frequently) into features for different platforms as well as for storage purposes They also use most other AWS products such as - RDS, S3, Cloudwatch, Cloudfront, API Gateway, ElastiCache, Kinesis and more. So its February and we check out the bill for the first month of the Year, the costings look like this, in order of highest price. - RDS - £3k - S3 - £2.5k - Cloudwatch - £2k - Cloudfront - £1.5k - EC2 - £2k - Fargate - £1k - API Gateway - £1k - ElastiCache - £700 - Lambda - £150 - Kinesis - £70 The total bill came to £13,920. Here you can see the % of products of the entire bill. So many of those services it would not matter if we were using Golang, those services are listed below - API Gateway That leaves just below: That means the maximum saving from the bill is £3,150, that is 22% from the overall bill. That is pretty good it's just under a quarter. However we still need those services, they will just be run using Golang rather than NodeJS. So now let's actually dig into what changes with those if we swap to Golang. Here is a link to the pricing pages to review my comments below yourselves: We are paying per "on-demand instance hour", which means we pay for the compute capacity we used from the time the instance launched until it was terminated or stopped. Let us assume Golang is twice as fast as NodeJS - many different benchmarks do support a similar theory. Of the £2k bill £1.5k is the "on-demand per instance hour" (the rest is related to NatGateway), so that is brought down to £750. So the total EC2 bill is £1,250. Down from £2k. Fargate pricing is very similar to EC2 on-demand model, in that we pay for the resources used from time the pod starts until it's terminated. It applies for memory, cpu and storage all per hour. Of the £1000 bill £800 is for the hourly vCPU charges. Similar to above we assume Golang to be twice as fast, we reduce £800 to £400 bringing the overall Fargate bill to £600. The last application type is a serverless lambda. With lambdas we typically pay for: - Invoke numbers In general it seems duration costs are higher than invoke numbers. CompanyX currently only pays £160, £140 of which is for compute time, the other £20 is for request count. We will need the same number of invoke numbers, as the same number of clients will be requesting data, but the latency is lower. So by speeding up our compute time we can possibly halve the cost to £70. Bringing the total for Lambda to £90 a month. Above produces to following savings: - EC2 - £750 - Fargate - £400 - Lambda - £70 That is a total saving of £1,220. That is 8% from our overall total bill of £13,920. After applying the saving, here you can see the % of products of the entire bill. So that's it - for CompanyX they will save £1,220 a month - which will bring their yearly bill down from £167,040 to £152,400. So £15,000 they can spend on something else. From my experience to most companies, including startups, £15,000 is pretty small. Considering the amount of developer time and effort which will have to be invested in order to learn and implement Golang, it does not seem worth it purely for cost reasons. However as I said at the start, if the reasons are more than cost than it definitely can reap long-term benefits. Please do remember this is just a numbers exercise, but I have tried to base it on a real scenario. Thanks for reading my article - I hope you found it useful.
OPCFW_CODE
projectcalico.org/IPv4Address annotation pointing to wrong node's IP CIDR On Kubernetes clusters > 1.20, calico-node fails because of this reason. When it occurs, the logs from this calico-node (running on node2 in example) looks like: startup/startup.go 411: Determined node name: node2 startup/startup.go 103: Starting node node2 with version v3.18.1 ... startup/reachaddr.go 57: Checking CIDR CIDR="<IP_ADDRESS>/16" startup/reachaddr.go 59: Found matching interface CIDR CIDR="<IP_ADDRESS>/16" startup/startup.go 808: Using autodetected IPv4 address <IP_ADDRESS>/16, detected by connecting to <IP_ADDRESS> startup/startup.go 585: Node IPv4 changed, will check for conflicts startup/startup.go 1128: Calico node 'node1' is already using the IPv4 address <IP_ADDRESS>. <----- problem startup/startup.go 347: Clearing out-of-date IPv4 address from this node IP="<IP_ADDRESS>/16" startup/startup.go 1340: Terminating If you look at the annotations on node1, it will show projectcalico.org/IPv4Address: <IP_ADDRESS>/16. However, <IP_ADDRESS> is not node1's IP, it is node2's IP. Thus node1's IP annotation is incorrect. Expected Behavior All nodes to receive an annotation that matches up with the node's IP. Example: If a the internal IP of a node is <IP_ADDRESS>, than it's the annotation it receives from calico-node should be projectcalico.org/IPv4Address: <IP_ADDRESS>/16. Current Behavior A node will receive an annotation that does not match up with it's IP. Example: A node's internal IP may be <IP_ADDRESS>, but the annotation it receives from calico-node will be projectcalico.org/IPv4Address: <IP_ADDRESS>/16. Steps to Reproduce (for bugs) The issue seems to be intermediate with low reproducibility. However, all the times this has happened has been from clusters coming from an upgrade. Particularly, clusters upgrading from a Kubernetes version below 1.20 to above 1.20, which introduces the tigera-operator for managing the installation of Calico. Context It seems likely that this comes from a race during an upgrade, possibly similar to https://github.com/projectcalico/calico/issues/4525. Your Environment Calico version: v3.18.1 (from tigera-operator v1.15.1) Orchestrator version: Upgraded to 1.20.x from previous version Operating System and version: Ubuntu 18 Mitigation If anyone comes across this problem in their cluster, here are the mitigation steps: Identify which node has the incorrect annotation by doing kubectl log <crashing calico-node pod> -n calico-system and look for something like: startup/startup.go 1128: Calico node 'node1' is already using the IPv4 address <IP of different node> Check node1 and see if it has annotation projectcalico.org/IPv4Address that does not match up with it's IP. If so, we must the annotation then restart the calico-node running on that node so it gets the receives the correct annotation: kubectl annotate node <node1> projectcalico.org/IPv4Address= --overwrite kubectl delete pod <running calico-node on node1> -n calico-system At this point when the failed calico-node restarts it will come up correctly, but if your in a hurry you can manually restart it 3. kubectl delete pod <failed calico-node> -n calico-system All the projectcalico.org/IPv4Address node annotations should now match up with their respective IPs, which will allow calico-nodes to run. Thank you for this. Really helpful as we experienced this exact issue upgrading kubernetes on aks from 1.19.3 to 1.20.5. Several of our production nodes had skewed labels and Ip's, while our dev environment was unaffected. Here's the Calico Users chat between @caseydavenport and @mattstam on this: https://calicousers.slack.com/archives/CPTH1KS00/p1621288636253300 (in case there's any clues in there) We experienced the same issue today upgrading AKS from 1.18.17, to 1.19.11 and directly to 1.20.7 once the cluster pods were all stable. The 1.19.11 calico pods didn't have this issue. I think the issue may be caused by custom annotation copying during the upgrade . @mattstam DreamRivulet has a verty good guess. We should probably not copy any annotations (or labels) from the calico namespace during upgrade. We already have an exception to not copy zone annotations so this should be an easy fix. @lmm I believe you were looking into this? @lwr20 yes this issue is on my plate, I'm hoping to start looking at this today or tomorrow. @lmm there is a good chance this is aks upgrade related. (I can speak for azure/aks). During an aks upgrade for legacy resaons we preserve node labels by copying them between nodes during the upgrade. We have a blacklist of labels/annotations that I'm goign to expand to containe everything under *.projectcalico.org. Should take another 2+ weeks to hit all regions though. If you've seen this on none AKS clusters then what I say probably doesn't apply. Fix is in but will take 2 weeks before it is in all AKS regions. If you've seen this on none AKS clusters then what I say probably doesn't apply. @paulgmiller we haven't had reports of this issue elsewhere (there is a Rancher that is similar but has a different cause). Thanks for the update! @paulgmiller Could you confirm if fix is deployed in all AKS regions now? Sounds like this one is fixed upstream? Going to close for now but please shout if I should re-open. Yep this is fixed everywhere
GITHUB_ARCHIVE
How do I mock out file writing to multiple files in python I'm trying to test a function in which one call results in multiple files being written: def pull_files(output_files=[]): for output_file in output_files: content = get_content_from_server(output_file) with open('/output/' + output_file, "wb") as code: code.write(content) I want my test to check that each call was made to open as expected, and that the content was written: def test_case(self): pull_files("file1.txt", "file2.txt") # Assert open("file1.txt", "wb") was called # Assert "file 1 content" was written to "file1.txt" # Assert open("file2.txt", "wb") was called # Assert "file 2 content" was written to "file2.txt" I've seen an example of handling two files here: Python mock builtin 'open' in a class using two different files But I can't wrap my head around how to track what is actually written to them. You would have to clear the folder you were writing to, write to the files, then open the files and read them, then compare what was read to what was expected. Another option (if this is being used in a configuration sense) is a round trip test where you load from a configuration, then write the configuration back to another file to test that they are the same (essentially that you are saving properly.) Here's an example mocking open and returning a StringIO as context: from io import StringIO def my_function(*fns): for i, fn in enumerate(fns): with open(fn, "wt") as fp: fp.write("content %d" % i) string_io_one = StringIO() string_io_two = StringIO() with mock.patch("%s.open" % __name__) as open_mock: open_mock.return_value.__enter__.side_effect = [string_io_one, string_io_two] my_function("file1.txt", "file2.txt") assert open_mock.called_with("file1.txt") string_io_one.seek(0) assert string_io_one.read() == "content 0" assert open_mock.called_with("file2.txt") string_io_two.seek(0) assert string_io_two.read() == "content 1" Similarly you could mock out "regular" use of open (without a context manager). Edits made: Changed to cover the test cases of the original question. First, you should never use a mutable object as your default argument to a function, which is an anti-pattern. You should change your function signature to def pull_files(output_files=()) instead. Then, to your question, you can do a os.chdir to /tmp/ and make a temporary directory, then write files in the temporary folder instead. Don't forget to change your working directory back to what it was after the test. Another solution is to modify your function slightly so that you are not prepending a prefix ('/output/' + output_file). This way, you can pass an io.BytesIO object instead of a path, which will let you modify the contents in-memory. def pull_files(output_files = None): surely? @Dan yeah I just edited the answer and changed it to a tuple instead. None also works but if OP really doesn't want to do an extra condition check then a tuple might be better, since it's also an iterable. Defaulting to None with the extra if condition to set the default is a very standard python pattern. I don't see any value in deviating from it. It is the pattern recommended in the link you've posted. @Dan I agree it's a standard practice. The only reason that I changed it to a tuple is that it will not break OP's code if they simply copy-paste my suggestion.
STACK_EXCHANGE
Written by Neethu Elizabeth Simon and Samantha Coyle Smart cities need smart security solutions to keep assets and the public safe and secure. Surveillance cameras are installed across various parts of smart cities to ensure the safety of everyone as well as their surrounding environment. With the advent of the Internet of Things (IoT) combined with advancements in computer vision (CV) and AI/ML, security-as-a-service solutions are becoming more important in enhancing safety in a smart city. This paper proposes a CV-based Security-as-a-Service Smart City Solution using AI/ML. This solution provides a framework and processing pipeline for deploying an AI-assisted, multi-camera Smart City Solution for monitoring vehicular and walkway traffic. Analytics dashboard displays camera installation locations on the smart city map such that data analytics captured across the city can improve situational awareness. They can also be leveraged to provide notifications and safety-related alerts. The proposed solution illustrates a web user-interface (UI) architecture that utilizes Open-Source software tools for data visualization to improve situational awareness. It covers various Grafana dashboards and plugins used along with metrics captured via AI model inferencing to provide insights about their surroundings. We also discuss privacy and security considerations concerning the use of security cameras while developing a CV-based AI/ML data visualization solution for situational awareness. Design & Architecture This implementation enables an AI-assisted reference design achieving situational awareness and property security and management data analytics. It focuses on user interface pieces leveraging Open-Source software tooling that interact with several Go microservice APIs to search camera metadata for registered cameras, attain a list of indexed models, layers, and labels, and to find video sessions assigned to a particular label by an inference model, and a video service providing endpoints to serve playlists and video segment data. Go microservices interface with the Open-Source Postgres database for data storage and retrieval. Figure 1 shows the frontend architecture leveraging a containerized microservice-based architecture for storing and displaying Smart City Situational Awareness solution data analytics. The diagram displays the flow of data with Go APIs interfacing with data storage, and Web UI framework built to display Smart City data to include Angular UI framework and Grafana dashboards. The data stored includes: - Camera metadata: location, RTSP URI, direction facing, description, ML pipeline to run for camera feed - Binary data for video frames using Binary Columns & PostgreSQL’s declarative partitioning, to allow easier lookups - Inference results from applying ML models using the OpenVINO Inference Engine Figure 1: Solution Architecture. Go microservices are primarily used by the Angular Web UI framework to access APIs and display data for live camera streaming, interfacing with one click to Grafana through a navigation bar and display previous video recordings using VideoJS. VideoJS is an HTML5 media player framework to show video in a browser served via a typical HTTP server. Figures 2-6 demonstrate key features of the UI leveraged for a Smart City Data Analytics Solution including the home page, camera configuration, visualization of live camera, and stored video stream. By having a single UI, it enables a seamless flow to navigate analytics from adding cameras, to visualizing inference results in Figure. 7. Figure 2: Web UI – Home Page. Figure 3: Web UI – Add Camera. Figure 4: Web UI – View Cameras. Figure 5: Web UI – View Camera Live Feeds. Figure 6: View Available Cameras with Feed. Figure 7: Web UI – View Camera Feed Inference Results. Grafana is a data visualizations, insights, and metrics Open Source tool leveraged to further enhance the data analytics for this Smart City solution. It is configured with a world map plugin to visualize edge device (i.e. camera) locations and contains custom queries leveraging Postgres solution storage and Go microservices APIs. It uses read-only database credentials where credentials populate data source configuration files. Upon startup, a default dashboard is provisioned for camera location visualization. A user may navigate to the default dashboard, shown in Figure 8, by selecting the Grafana tab in Angular UI top navigation bar (Figure 2). Figure 8: Grafana – Default Dashboard. Figure 9: Grafana – Edge Device Locations on World Map. Figures 10 and 11 highlight the value Grafana custom dashboards enable. Dashboards demonstrate inference results from applying person, vehicle, and bicycle detection, and age and gender recognition models from OpenVINO Model Zoo to video data to derive meaning of video data. This shows the locale-based inference results and expand with additional dashboards and dashboard playlists. Figure 10: Grafana - Inference Analytics. Figure 11: Grafana – More Inference Analytics. By enabling data analytics through queries to Postgres on configured cameras, inference results as well as camera data, it is possible to then enable notifications through alert rules and notifications capabilities within Grafana. Alerting may leverage labels like security levels, or within certain namespaces of solution to then kick-off notification policies through Slack, email, or so on. For example, in this Smart City Situational Awareness Solution, alerting may be enabled when someone is in a place where they are not allowed. Security personnel could use this notification to better protect the area in real-time and take appropriate action thanks to data analytics from this Smart City data analytics solution. Security & Privacy Considerations The following considerations were made while designing this secure solution involving potentially identifiable information: - Ensured files are inaccessible to unauthorized users. - Application service authentication using authorization policies & tokens. - TLS encryption for data transferred to/from HTTP APIs and UI. - Encrypted RTSP URIs which may contain sensitive information. - Vault was used to store sensitive data like username & passwords, certificates and private keys, authorization policies and tokens used by application services, as well as generate dynamic, temporary database credentials by following their baseline production hardening recommendations. - Sensitive information is hidden by default in the UI. - Mask facial details in video for privacy. Designed as a reference implementation, this solution leverages Docker containers to wrap services with dependencies and deploys solution through Docker compose. It is simple to start services with “docker compose up.” This allows for an easily deployed Smart City data analytics solution with preconfigured dashboards and data sources. However, in a production-grade environment, one might use an elaborate orchestration solution to inject security credentials and handle startup/shutdown of media pipelines in response to configuration changes. This solution leverages Open-Source tooling, thereby eliminating the operational cost associated with deployment; it would be easy to deploy certain aspects of the modular microservice-based solution to Cloud in future larger-deployment-based scenarios. Hence, a highly customizable data analytics solution is enabled for smart city situational awareness at the Edge. - Grafana: The Open Observability Platform. Grafana Labs. https://grafana.com/ - Build fast, reliable, and efficient software at scale. Go. https://go.dev/ - Group, P. S. Q. L. G. D. (2022, August 2). PostgreSQL. https://www.postgresql.org/ - Angular. https://angular.io/ - Inference Engine Developer Guide - OpenVINO™ toolkit. OpenVINO. https://docs.openvino.ai/2020.2/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html This article was edited by Bernard Fong.
OPCFW_CODE
WHY ADDING RAM IS A GOOD THING A very common bit of advice we all get from computer people is that if we want to speed up our computer, we should add more RAM, or Random Access Memory. This is good advice, and is usually true. But how does adding RAM achieve this? THE RAM'S FUNCTION When you start up your computer, it gets its initial start up instructions from the ROM, or Read Only Memory, which sets up the basic configuration of your PC and enables it to start reading your Hard Drive. After that, all the programs are read from the hard drive, and loaded into your RAM, that is, into the computer's "memory." Your Windows or Mac operating system, your word processor, your internet browser - all these things are loaded into your RAM. Now, if your RAM is small, say 128 Megabytes, you start running into problems. The operating systems, like windows XP, will use up around 120 Megabytes. Then your Office suite of programs will use up another 100MB or so. But wait! Thats 128 + 120 = 248 Megabytes, and the computer is still able to load more programs! How is that possible? When your RAM gets full, what the computer does is that it does page swopping. This is when the computer swops blocks of memory, that aren't being used, out of memory and writes them back to the hard drive, into a special area of the hard drive called the Paging File, and it then puts the needed data into this place in the RAM that has been "cleared out." But here's the rub: RAM is very fast to access, but the hard drive is very (relatively) slow. So when you are running 300 or 400 Megabytes of programs in a 128 Megabyte RAM space, your poor computer is going to be frantically swopping blocks of data between your slow hard drive and your fast RAM. When you're not using your mouse, it swops all the data connected with your mouse out of RAM. When you touch your mouse again, it rushes off to get this data off the hard drive again. So the end result that you experience, is that the computer seems sluggish and unresponsive. Putting more RAM into your computer means that your PC no longer has to swop data between your hard drive and your computer, so all the activity takes place at the much faster RAM speed. RAM, being a solid state device (ie no moving parts whatsoever), is many hundreds of times faster than a hard drive, which has a spinning disk and magnetic pickup heads, that have to physically move to where the data is on the disk, and read it in. THE RING ANALOGY. Think of it this way: Let's say you like to wear rings on your fingers. When your fingers are full, if you want to wear new rings, you'll have to take off some of the ones you're wearing, and put them back in the safe, before you can don the new ones. Getting more RAM is like getting 20 or 40 more fingers to wear your rings on - so you can wear them all at once! So the more RAM you have, the more programs and graphics you can have running at the same time without slowing your computer down. HOW MUCH DO YOU NEED? 1 Gigabyte (1,000 Megabytes) will be good enough if your budget is tight. 2 GB is better, especially if you are running Windows Vista, watching lots of movies or doing graphic design, and 4GB or more will keep you humming along no matter what you're doing on your PC.
OPCFW_CODE
ORFEUS - Scenarios The ORFEUS application supports several message exchange scenarios: CTD Scenario (old one, to be abandoned): - Simple forwarding of consignment note data - Consignment note data is forwarded simultaneously to all participating railway undertakings. - In addition a paper consignment note has to accompany the transport. - This is the old traditional scenario, which is being replaced with ECTD or PCN/ECN scenarios. - The forwarding railway company collects data about an international transport (majority of the CIM consignment or CUV wagon note information) - Then its national system (NIS) sends the data to the CDS using the Create Transport Dossier (CTD) message. - A copy of the CTD message is sent by the CDS to all other railway undertakings involved in the transport. The CDS covers also filter function so the distribution rules may be adjusted. - If any change occurs with the consignment, railway undertaking which sent the CTD can send Update Transport Dossier (UTD) message. A copy of the UTD message is sent by the CDS to all other railway undertakings involved in the transport. ECTD Scenario : - Scenario for message exchange is similar to the present CTD one, but it uses the new ECN message format. - It enables to transmit complete CIM/CUV note information content using the ECN format. - Only dossier creating carrier is allowed to communicate updates of the transport information) because it is not yet possible to determine carrier who is in the custody of the goods). PCN Scenario : - Above the ECTD scenario, also the handover messages are added. So the central system is able to identify currently responsible carrier. - Therefore the carrier who is in the custody of the goods is allowed to apply updates to the transport information. - The information runs in parallel with paper consignment notes. Paper has preference over the information. ECN Scenario : - Instead of a paper consignment note, an electronic consignment note, handled by the CDS of Raildata, accompanies the transport. - Messages are exchanged similar to the PCN scenario, but the paper consignment notes are skipped. - Therefore the information (messages) is the only and legally binding data source about consignments and their hand-overs. Currently RAILDATA works on two new concepts: - Simplified Scenario, which should enable parallel use of paper and electronic consignment notes, but in less complex way than present ECTD/PCN/ECN. Several simplifications are foreseen, i.e. confirmation messages ACK and NACK will be omitted. - Support for Purchase & Sale concept - messages for for subsitute carriers (in CIM role 2). This could also cover the Consignment order message (COM) requirement of TSI TAF. Specifications are under discussion still.
OPCFW_CODE
// // ScreenOrientationTools.swift // PLPlayerDemo // // Created by 卢卓桓 on 2019/8/12. // Copyright © 2019 zhineng. All rights reserved. // import UIKit class ScreenOrientationTools { // 强制旋转横屏 class func forceOrientationLandscape(view: UIView) { let appdelegate:AppDelegate = UIApplication.shared.delegate as! AppDelegate appdelegate.isForceLandscape = true appdelegate.isForcePortrait = false appdelegate.isForceAllDerictions = false _ = appdelegate.application(UIApplication.shared, supportedInterfaceOrientationsFor: view.window) let oriention = UIInterfaceOrientation.landscapeRight // 设置屏幕为横屏 UIDevice.current.setValue(oriention.rawValue, forKey: "orientation") UIViewController.attemptRotationToDeviceOrientation() } // 强制旋转竖屏 class func forceOrientationPortrait(view: UIView) { let appdelegate:AppDelegate = UIApplication.shared.delegate as! AppDelegate appdelegate.isForceLandscape = false appdelegate.isForcePortrait = true appdelegate.isForceAllDerictions = false _ = appdelegate.application(UIApplication.shared, supportedInterfaceOrientationsFor: view.window) let oriention = UIInterfaceOrientation.portrait // 设置屏幕为竖屏 UIDevice.current.setValue(oriention.rawValue, forKey: "orientation") UIViewController.attemptRotationToDeviceOrientation() } }
STACK_EDU
About this Workshop Lifting off a project with agile chartering provides everyone with a common understanding and co-ownership from the outset. If a team doesn’t have a common understanding about the purpose of what you are about to spend however many months on creating, or alignment on how you will be working together, or the context of where the product fits in the bigger scheme of things, then you are going to spend a lot of time building stuff that won’t be right. Project chartering is as old as project management. It’s a pattern that has been used successfully for projects whether they are agile or not. Some project leads talk at you with a boring slide deck and then, thank you very much, off you go and write some code. Afterwards, you find that everyone has a different interpretation of what has been presented during the meeting. Moreover, many agile projects often don’t have a charter at all. Too many teams are using retrospectives to repair misunderstandings. Effective team chartering can avert misunderstandings from the start and set the right ground for your team to succeed. This session begins with an overview of the 3 dimensions of agile chartering that were introduced in the book Liftoff by Diana Larson and Ainsey Neis; purpose for inspiration, alignment for team collaboration and context for the project dynamics. For the remainder of the session, participants will take part in a hands-on workshop that follows the structure of a day-long agile chartering liftoff meeting, helping them to put into practice what they just learned. The workshop is designed to follow the normal flow of an agile chartering meeting, but a much shorter version in order to fit into the canvas of a conference, so we've called it ‘Agile Chartering Express’. It explores in detail the 3 dimensions of agile chartering. Liftoff with agile chartering is for everyone, bringing together business and technical people to create an initial project charter. And overall, the important thing to remember is that it is not about the charter, it’s about the chartering process. It’s about living it. By the end of the workshop you will understand how important agile chartering is to the successful outcome of your project. About the Speakers Lynne is Head of the Consultancy and Web Technologies Competence Unit at Zuhlke Engineering UK. She was born in Seattle, Washington, USA, was an 80’s model in Europe, a fashion label owner and hotelier. She had a software engineering education in the 90s in London, and is an agile evangelist and eco warrior. Senior UX and Innovation Consultant at Zuhlke Engineering. Specialising in discovery and product strategy Born in Poland, graduated in Biotechnology, experienced product designer and UX consultant, passionate about delivering great products and innovation.
OPCFW_CODE
WHAT IS A MARK UP LANGUAGE? In computer text processing, a markup language is a system for annotating a document in a way that is visually distinguishable from the content. It is used only to format the text, so that when the document is processed for display, the markup language does not appear A markup language is a computer language that uses tags to define elements within a document. It is human-readable, meaning markup files contain standard words, rather than typical programming syntax. While several markup languages exist, the two most popular are HTML and XML. What are the types of markup languages? - HTML – Hypertext Markup Language. - KML – Keyhole Markup Language. - MathML – Mathematical Markup Language. - SGML – Standard Generalized Markup Language. - XHTML – eXtensible Hypertext Markup Language. - XML – eXtensible Markup Language. Why is it called markup language? HTML is called as markup language because, Unlike any other language in software(C, C++ etc) we cant do conditional events through HTML. HTML is used only to present the information in a way we want. By using html you can create tables, labels forms to display your information How do markup languages work? Markup languages are languages used by a computer to annotate a document. These languages are readable by humans, which means that they are usually written using standard words, instead of technical programming language terminology WHAT IS EXTENSIBLE APPLICATION MARK UP LANGUAGE? Extensible Application Markup Language is a declarative XML-based language developed by Microsoft that is used for initializing structured values and objects. It is available under Microsoft’s Open Specification Promise The Extensible Application Markup Language (XAML) is a widely used XML-based language in .NET Framework 3.0 and .NET Framework 4.0 technologies. Basically, this programming language supports the backbone of different systems today, particularly Windows, such as: - Windows Workflow Foundation (WF) - Windows Presentation Foundation (WPF) - Windows Runtime XAML Framework - Windows 10 Mobile - Apps from the Windows Store In XAML usage, an attribute referred to as Xname is popular and utilized everywhere. You might ask – what is Xname? Is it a person? As coders use this element extensively, many are curious about the xname xname net worth. In the coding world, the Xname attribute can identify or introduce any unique element or work as a variable that holds an object reference. When XAML is implemented, a specific Xname turns into a unique name where the underlying code is applied. As represented in text, XAML files become XML files with the .xaml extension. They exist as plain text files that use custom tags in describing the features of your document. Markup Language And Its Uses A markup language involves comprehensible tags, names, and keyboards that make formatting possible in a certain webpage and its information. Markup language has letters and numbers that, when combined logically, forms phrases and sentences with unique meanings. In markup language, tags and keywords serve as the letters, numbers, and symbols. When used to define a text, they change how elements are displayed in a web browser. Today’s most popular markup languages are the Hypertext Markup Language (HTML) and Extensible Markup Language (XML). While these markup languages have their own qualities, take a look at how XML differs from HTML, and why some developers prefer to use it:https://9bd57a8c93d8796e184dfca0e195d5f4.safeframe.googlesyndication.com/safeframe/1-0-38/html/container.html - XML focuses on content than format. - Unlike HTML, XML has strict standards in closing tags. - HTML tags are limited, while you can be more extensible and expressive with XML. - XML markup code is relatively easier to understand. - As long as your XML is well-structured, you’ll have no issues in reading and writing programs. Additionally, companies can rely on XML in creating their business websites. XML delivers advanced data coding that allows seamless integration of information flows. With one set of XML codes and tags, you can utilize and share them among your company’s databases and systems. XML is also advantageous in online transactions and purchases. Essentially, products and services sold over the web always contain details and specifications, prices, terms and policies, delivery information, and more. Through XML, your business won’t encounter any troubles during these activities. In 2006, Microsoft released a declarative XML-based markup language called the Extensible Application Markup Language (XAML). Is XAML a programming language? Extensible application mark up language is a new descriptive programming language developed by Microsoft to write user interfaces for next-generation managed applications. XAML is the language to build user interfaces for Windows and Mobile applications that use Windows Presentation Foundation (WPF), UWP, and Xamarin Forms Advantages Of XAML As mentioned earlier, XML and its XML-based language highlight content rather than design and formatting. Therefore, you can focus and control more of your application or document’s behavior by working on your codes, then letting the designing professionals make the overall look of your web page. However, take note that this is only possible if you have a clear distinction between your code and markup by integrating the Model View-View Model (MVVM) design pattern. Here are some other advantages of using XAML on your programs: - Data identification is more efficient in XAML documents, allowing a plethora of web applications. - Interlinking multiple documents is easier since they can be added in order into the XAML document. - Simple user interface (UI) designing process. - Easier to design a dynamic UI in XAML. - HTML has limited control and program functionality, whereas XAML provides back-end scripting language to support more functions. XAML has a flexible UI definition that can power the following app platforms: - Windows Presentation Foundation (WPF) - Universal Windows Platform Markup languages have significant contributions to content creation, design, and presentation. Without these elements, creating documents and web pages with unique content is impossible. When planning and developing your website, incorporating XAML is worth considering because of its numerous advantages and applications against other languages.
OPCFW_CODE
feat(xslt): coverage report for SonarQube #877 Proposal for issue #877 I propose to answer this issue by adding a new output to the coverage-report.xsl. The new report generated, (sonar-coverage-report.xml) corresponds to the generic format of SonarQube. Its output path can be configured, by default it is in the same folder as the other coverage reports. There is almost nothing to do! Add an xsl and xspec file to a project. Run the xspec with the -c option. And add the follow code in the pom.xml of a Maven project : <properties> <sonar.sources>src/main/java,src/main/resources</sonar.sources> <sonar.coverageReportPaths>${project.basedir}/src/main/resources/xsl/xspec/sonar-coverage-report.xml</sonar.coverageReportPaths> </properties>. And Sonar will analyze the xsl, like this: Thanks for the contribution. Have you considered an independent reporter stylesheet instead of entangling the new coverage SonarQube reporter stylesheet in the existing coverage HTML reporter stylesheet? CLI and Ant already have their methods to customize the coverage reporter stylesheet: COVERAGE_REPORTER_XSL environment variable and xspec.coverage.reporter.xsl Ant property respectively, as demonstrated in tests: CLI https://github.com/xspec/xspec/blob/2962c7d1ed13153c7cc7883b25fb1e59e4bbef6d/test/xspec.bats#L1770-L1771 Ant https://github.com/xspec/xspec/blob/2962c7d1ed13153c7cc7883b25fb1e59e4bbef6d/test/xspec.bats#L1785-L1790 So if you develop an independent stylesheet, src/reporter/coverage-sonar-report.xsl for example, then you can get SonarQube reports by setting COVERAGE_REPORTER_XSL=[xspec]/src/reporter/coverage-sonar-report.xsl (CLI) or xspec.coverage.reporter.xsl=[xspec]/src/reporter/coverage-sonar-report.xsl (Ant) without affecting the existing coverage-report.xsl. format-xspec-report-folding.xsl would be a good example of this approach. It generates a different form of HTML report without affecting the default HTML reporter stylesheet (format-xspec-report.xsl) while sharing a large portion of the stylesheet code. You can invoke it by setting HTML_REPORTER_XSL=[xspec]/src/reporter/format-xspec-report-folding.xsl (CLI) or xspec.html.reporter.xsl=[xspec]/src/reporter/format-xspec-report-folding.xsl (Ant). And if you go further, please consider not editing unrelated lines like this. Also please have it in mind that this kind of hardcoding would not last: https://github.com/xspec/xspec/blob/d66a3c5d3e289456315e5d7ff198130fda4f5f19/src/reporter/coverage-report.xsl#L84 Hello! Thanks for your feedback, Your comment is very detailed, thank you very much :) I understood the problem of editing theexisting coverage HTML reporter stylesheet. I will follow your advice, and offer you an independent reporter stylesheet. Hello AirQuick, I come back having taken into account your remarks. I suggest an independent style sheet to create the coverage report for SonarQube. However, you advise me against hardcoding. But how can I give a default value to the parameter sonar-coverage-report-url (line 133) ? Thank you for your help and your time. Is it really necessary to output two documents (one for the standard coverage HTML report and one for SonarQube) at the same time? From what I see in the sample screenshot, I guess that the user does not need the standard coverage HTML report when he/she opts to generate the SonarQube report. I think the reporters in general should output only one document, otherwise we'll have to write and maintain dedicated tests. @AirQuick : having both output is important. In a maven project, I have only one configuration, and the same configuration is used when I build on my local laptop and when project is built on Jenkins. And having both output was my initial specification. Both your local laptop and remote Jenkins use xspec-maven-plugin? If so, could xspec-maven-plugin perform the coverage report transformation twice? One with the standard src/coverage-reporter.xsl and one with this new src/reporter/coverage-sonar-report.xsl? That will facilitate merging this pull request greatly, because then we can write and maintain the test for src/reporter/coverage-sonar-report.xsl with minimum effort. Yes, you are right, I could put a tee in the coverage reporter pipeline, and send to both XSL. @VTristan : could you please take @AirQuick advice an make your src/reporter/coverage-sonar-report.xsl produce only one report, the one for Sonar ? Thanks. Then I can take responsibility for providing a facility to test this pull request. (half done by #900) @VTristan Does this stylesheet handle xsl:import (and xsl:include) correctly? Running it with the tutorial demo produced a suspicious result. Can you reduce the code duplication between coverage-report.xsl and coverage-sonar-report.xsl? For that purpose, you can modify coverage-report.xsl a little. Hello AirQuick, (I had a little problem with git, you should only consider the last commit update 13/05/2020 : 05e5770) OK, I modified report.coverage.xsl a bit, as you asked me. Then adapted report.sonar.coverage.xsl. And finally added in the .bat and .sh a line to be able to modify the environment variable COVERAGE_HTML. I hesitated to rename it, for example in "COVERAGE_OUTPUT", but I prefer to leave it to you to find the most suitable name in case the modifications interest you. So just change the environment variables. e.g on windows set "COVERAGE_REPORTER_XSL=%XSPEC_HOME%\src\reporter\coverage-sonar-report.xsl" set "COVERAGE_HTML=%TEST_DIR%\%TARGET_FILE_NAME%-sonar-coverage.xml" Even with Linux or Ant, it shouldn't be a problem. Thanks @VTristan The diff of coverage-report.xsl is very large. Actually the entire file is modified in this pull request, because of these changes: lines endings (CR+LF to LF) indentation line wrapping at different positions Can you please minimize the number of modified lines? Ideally, the diff should be only the lines that you actually mean to change in this implementation. The various points mentioned have been corrected. I hope I haven't forgotten anything! Thank you again for your advice and your patience. Thanks @VTristan Getting better and better. Unfortunately the diff of coverage-report.xsl is still overwhelming, because the line ending is LF. You need to restore coverage-report.xsl to CR+LF. (At some point in the future, we will probably change all the stylesheets to LF. But not yet. So let's stay with the original line ending (CR+LF) for now.) Ah, I understood the opposite. Thank you @AirQuick ! Does this stylesheet handle xsl:import (and xsl:include) correctly? Running it with the tutorial demo produced a suspicious result. Still suspicious. coverage-report.xsl still has too many unrelated changes including line endings. Please rebase the branch of this pull request to the master branch of this repository. The branch of this pull request contains changes that should have been rebased instead.
GITHUB_ARCHIVE
- Bo Jayatilaka (Fermi National Accelerator Lab. (US)) - Nick Smith (Fermi National Accelerator Lab. (US)) - Robert Illingworth (Fermi National Accelerator Lab. (US)) - Eric Vaandering (Fermi National Accelerator Lab. (US)) - Andrew John Norman (Fermi National Accelerator Lab. (US)) The ESCAPE European Union funded project aims at integrating facilities of astronomy, astroparticle and particle physics into a single collaborative cluster or data lake. The data requirements of such data lake are in the exabyte scale and the data should follow the FAIR principles (Findable, Accessible, Interoperable, Re-usable). To fulfill those requirements significant RnD is foreseen with... The DUNE collaboration has been using Rucio since 2018 to transport data to our many European remote storage elements. We currently have 13.8 PB of data under Rucio management at 13 remote storage elements. We present our experience thus far, as well as our future plans to make Rucio our sole file location catalog. We will present our planned data discovery system, and the role of Rucio in... We will describe our plans for using RUCIO within the data management system at the Linac Coherent Light Source (LCLS) at SLAC. An overview of the LCLS data management system will be presented and what role RUCIO will play for cataloging, distributing and archiving of the data files. We are still in the testing phase but plan to use RUCIO in production within the next few month. MSKCC's Computational Oncology group performs prospective and retrospective studies on a number of cancer types with a focus on cancer evolution. The data being collected and managed for research comes from many sources. Broadly, the data may be categorized into molecular, imaging and clinical data types. The studies tend to be cross-sectional and longitudinal. Users require heterogenous... An update on the CMS transition to Rucio, expected to be completed this year, will be given. Results of scale tests, data consistency work, and improvements in the kubernetes infrastructure will be the focus of this talk. The Data Management requirements coming from the EGI and EOSC-Hub user communities have pictured Rucio (together with a Data transfer engine) as one of the possible solutions for their needs. Since the 2nd Rucio workshop a number of enhancements and new developments (in primis the support for OIDC and the kubernetes deployment improvements) have been implemented and they are going towards the... The search for Dark Matter in the XENON experiment at the LNGS laboratory in Italy enters a new phase, XENONnT in 2020. Managed by the University of Chicago, Xenon's Rucio deployment plays a central role in the data management between the collaboration's end points. In preparation for the new phase, there have been notable upgrades in components of the production and analysis pipeline and they... Rucio has evolved as a distributed data management system to be used by scientific communities beyond High Energy Physics. This includes disengaging its core code from a specific file transfer tool. In this talk I will discuss using Globus Online as a file transfer tool with Rucio, the current state of testing and the possibilities for the future in light of NSLSII's data ecosystem
OPCFW_CODE
I have a large project where the customer wishes to use NetgearWAG102 access points with wireless Windows Mobile winCE.net devices. The customer has about 400 stores with about four to five mobile devices per store. Could you please explain the basic principles of creating a WPA2 compliant network in this environment? WPA2 is available in two forms: WPA2-Personal for home and small office use, and WPA2-Enterprise for business use. Given your target application, you should use WPA2-Enterprise for strong, individual device authentication. You will require support WPA2-Enterprise support on your winCE.net devices and Netgear APs, and at least one RADIUS authentication server for 802.1X/EAP authentication. Start with your mobile devices. Determine whether their Wi-Fi interfaces support WPA2-Enterprise; this may require installing driver upgrades. If WPA2 is not supported, use WPA instead. The Windows Mobile operating system supports 802.1X and several EAP types, but you'll need to choose an EAP type that meets your security needs and is supported by your devices as well. For example, Protected EAP (PEAP) would require configuring each mobile device with a username and password, while EAP-TLS would require installing a digital certificate on each device. If your mobile devices simply cannot support 802.1X, you may need to resort to WPA2-Personal in conjunction with MAC ACLs and a long, random PreShared Key. Next, install, and configure a RADIUS authentication server to match the EAP type used by your mobile devices. You will need to create an account for each mobile device, either on the RADIUS server itself, or in a user database (e.g., Windows AD, LDAP database) that interfaces with your RADIUS server. The RADIUS server will be consulted each time a mobile device connects to the network, so give some consideration to where the RADIUS server should be placed, and if you really need more than one server for redundancy or performance. Depending on the EAP type, you will probably need to configure each authentication server with its own digital certificate. The easiest component to configure will be your Netgear APs. In a WPA2-Enterprise network, APs serve as the middle man, relaying access requests from wireless clients to a RADIUS authentication server. WAG102 APs support WPA2-Enterprise, so just configure them with your authentication server's IP address and RADIUS shared secret. Beware that RADIUS protocol can expose sensitive information, so communication between APs and your authentication server(s) should be protected -- for example, using a site to site VPN to connect stores to a centrally-located server. To learn more, read our Wireless LAN Security Lunchtime Learning Series tip about WPA2. Dig Deeper on Wireless LAN (WLAN) Related Q&A from Lisa Phifer As the remote workforce increases, network managers and users might opt to set up two concurrent VPN connections from the same remote device. But ... Continue Reading Is there a difference between a wireless access point vs. a router? Yes -- while the two wireless devices are related, they meet different needs in a... Continue Reading Learn the differences between site-to-site VPNs vs. remote-access VPNs and find out about the protocols, benefits and the data security methods used ... Continue Reading
OPCFW_CODE
package kaptainwutax.minemap.ui.component; import kaptainwutax.minemap.ui.map.MapPanel; import kaptainwutax.seedutils.mc.Dimension; import kaptainwutax.seedutils.mc.MCVersion; import java.util.*; import java.util.concurrent.atomic.AtomicBoolean; public class TabGroup { private final MCVersion version; private long worldSeed; protected Map<Dimension, MapPanel> mapPanels = new LinkedHashMap<>(); public TabGroup(MCVersion version, String worldSeed, int threadCount) { this(version, worldSeed, threadCount, Dimension.values()); } public TabGroup(MCVersion version, String worldSeed, int threadCount, Dimension[] dimensions) { this.version = version; if(worldSeed.isEmpty()) { this.loadSeed(new Random().nextLong(), threadCount, dimensions); return; } try { this.loadSeed(Long.parseLong(worldSeed), threadCount, dimensions); } catch(NumberFormatException e) { this.loadSeed(worldSeed.hashCode(), threadCount, dimensions); } } public MCVersion getVersion() { return this.version; } public long getWorldSeed() { return this.worldSeed; } public Collection<MapPanel> getMapPanels() { return this.mapPanels.values(); } private void loadSeed(long worldSeed, int threadCount, Dimension[] dimensions) { this.worldSeed = worldSeed; for(Dimension dimension : dimensions) { this.mapPanels.put(dimension, new MapPanel(this.getVersion(), dimension, this.worldSeed, threadCount)); } } public void add(WorldTabs tabs) { String prefix = "[" + this.version + "] "; AtomicBoolean first = new AtomicBoolean(true); this.mapPanels.forEach((dimension, mapPanel) -> { String s = dimension.getName().substring(0, 1).toUpperCase() + dimension.getName().substring(1); tabs.addMapTab(prefix + s + " " + this.worldSeed, this, mapPanel); if(first.get()) { tabs.setSelectedIndex(tabs.getTabCount() - 1); first.set(false); } }); } public void invalidateAll() { this.mapPanels.values().forEach(MapPanel::restart); } public void removeIfPresent(MapPanel mapPanel) { this.mapPanels.entrySet().removeIf(e -> e.getValue() == mapPanel); } public boolean contains(MapPanel mapPanel) { return this.mapPanels.containsValue(mapPanel); } public boolean isEmpty() { return this.mapPanels.isEmpty(); } }
STACK_EDU
extern crate users; use users::{Users, Groups, UsersCache}; use users::os::unix::{UserExt, GroupExt}; //use users::os::bsd::UserExt as BSDUserExt; extern crate env_logger; fn main() { env_logger::init(); let cache = UsersCache::new(); let current_uid = cache.get_current_uid(); println!("Your UID is {}", current_uid); let you = cache.get_user_by_uid(current_uid).expect("No entry for current user!"); println!("Your username is {}", you.name().to_string_lossy()); println!("Your shell is {}", you.shell().display()); println!("Your home directory is {}", you.home_dir().display()); // The two fields below are only available on BSD systems. // Linux systems don’t have the fields in their `passwd` structs! //println!("Your password change timestamp is {}", you.password_change_time()); //println!("Your password expiry timestamp is {}", you.password_expire_time()); let primary_group = cache.get_group_by_gid(you.primary_group_id()).expect("No entry for your primary group!"); println!("Your primary group has ID {} and name {}", primary_group.gid(), primary_group.name().to_string_lossy()); if primary_group.members().is_empty() { println!("There are no other members of that group."); } else { for username in primary_group.members() { println!("User {} is also a member of that group.", username.to_string_lossy()); } } }
STACK_EDU
Fix dryrun option for SQL/Cassandra schema update command What changed? Fix dryrun option for SQL/Cassandra schema update command. The current dryrun is not really "dry", it will: If not providing a database, it will create a new database to perform the operation. It's not useful because this will not simulating what will happen without dryrun. If providing a database, it will DROP all the tables in the database, which is DANGEROUS to do. Remove "-y" as dryrun option. I believe this is a mistake at the early time. "-y" seems to be for skipping prompt confirmation. Why? Cadence users complaint about this and it has caused database to drop. How did you test it? Local test --user uber --password uber --plugin mysql --db cadence_visibility update-schema -d ./schema/mysql/v57/visibility/versioned/ --dryrun 2021/02/17 13:58:45 UpdateSchemeTask started, config=&{DBName: TargetVersion: SchemaDir:./schema/mysql/v57/visibility/versioned/ IsDryRun:true} 2021/02/17 13:58:45 In DryRun mode, this command will only print queries without executing... 2021/02/17 13:58:45 DryRun of updating to version: 0.1, manifest: &{0.1 0.1 base version of visibility schema [base.sql] 11a94c94f6e45a2dbea217c014db8939} 2021/02/17 13:58:45 DryRun query:CREATE TABLE executions_visibility (domain_id CHAR(64) NOT NULL,run_id CHAR(64) NOT NULL,start_time DATETIME(6) NOT NULL,execution_time DATETIME(6) NOT NULL,workflow_id VARCHAR(255) NOT NULL,workflow_type_name VARCHAR(255) NOT NULL,close_status INT, close_time DATETIME(6) NULL,history_length BIGINT,memo BLOB,encoding VARCHAR(64) NOT NULL,PRIMARY KEY (domain_id, run_id)); 2021/02/17 13:58:45 DryRun query:CREATE INDEX by_type_start_time ON executions_visibility (domain_id, workflow_type_name, close_status, start_time DESC, run_id); 2021/02/17 13:58:45 DryRun query:CREATE INDEX by_workflow_id_start_time ON executions_visibility (domain_id, workflow_id, close_status, start_time DESC, run_id); 2021/02/17 13:58:45 DryRun query:CREATE INDEX by_status_by_close_time ON executions_visibility (domain_id, close_status, start_time DESC, run_id); 2021/02/17 13:58:45 DryRun of updating to version: 0.2, manifest: &{0.2 0.2 add task_list field to visibility [add_task_list.sql] 75ca3e95da717fdc00b05b55b563532a} 2021/02/17 13:58:45 DryRun query:ALTER TABLE executions_visibility ADD task_list varchar(255) DEFAULT ''; 2021/02/17 13:58:45 DryRun of updating to version: 0.3, manifest: &{0.3 0.2 add close time and closed state index [vs_index.sql] 5c0df1a919b33cdb1902180ab189c7f8} 2021/02/17 13:58:45 DryRun query:CREATE INDEX by_close_time_by_status ON executions_visibility (domain_id, close_time DESC, run_id, close_status); 2021/02/17 13:58:45 UpdateSchemeTask done Potential risks NO Coverage increased (+0.04%) to 64.569% when pulling ddb539235ff200542c655fab32cd5570461d7729 on qlong-dry-run into 2f822ee6bd8dddf24efa2c6c05316ee4c9674d5e on master.
GITHUB_ARCHIVE
Port to Python 3 Hi, anyone willing to port jpype to python 3? I know there is the @tcalmant port. But it is not up to date. I could help out porting the Python part but probably won't be a big help porting the C code ;) Nice infos can be found here: http://python3porting.com/ https://docs.python.org/3/howto/pyporting.html Hi, When I started porting the project, I made a very small documentation (in French, sorry :p). Here are the links, maybe Google translate can help for now. I'll translate them ASAP. https://github.com/isandlaTech/cohorte-runtime/blob/master/documents/doc-jpype-py3/source/jpype-py.rst https://github.com/isandlaTech/cohorte-runtime/blob/master/documents/doc-jpype-py3/source/jpype-c.rst https://github.com/isandlaTech/cohorte-runtime/blob/master/documents/doc-jpype-py3/source/liens.rst FYI, the most "painful" parts for the conversion are: the conversion of string/unicode to bytes/string in Python 3 the replacement of PyObject by PyCapsule to let Python keep track of our C objects Finally, the setup.py file of jpype-py3 adds support for Cygwin, if you're interested. @baztian porting the c-part is also the hardest part. I've tried once, but especially the string conversion is really painful. Here is a translated version of the C++ part of my previous links: https://github.com/tcalmant/jpype-py3/wiki/py3_cpp Another solution could be to update the jpype-py3 fork project, and to merge them once they'll have equivalent features :) Hi @tcalmant, your documentation looks really promising. Maybe it convices @marscher to give it another try on the c/c++ part :) a nice starting point would be to have equal test suites for py2/3, so progress can be tracked during development. That's definitely something where I can help / what I can do. "Martin K. Scherer"<EMAIL_ADDRESS>schrieb: a nice starting point would be to have equal test suites for py2/3, so progress can be tracked during development. Reply to this email directly or view it on GitHub: https://github.com/originell/jpype/issues/119#issuecomment-73742639 -- Diese Nachricht wurde von meinem Android-Mobiltelefon mit K-9 Mail gesendet. @tcalmant, do you think its reasonable to wrap PyObject/PyEncapsulate in our own wrapper class for compatibility or might it be enough to define some "clever" macros? Well, PyCapsule has a backport for Python 2.7 and 3.1+, so it could be used directly if you don't target Python 2.6. BUT, I think that wrapping this kind of class is a better option: the Python C API has proved it is no fully stable on this kind of subject, so having a custom wrapping would simplify the maintainance of the library. Also, +1 for test suites :) I thought we have already dropped out support for Python 2.6, so we could use PyCapsulate. A wrapper would also be nice, but it also add additional overhead. Python 2.6 is still in the travis build matrix OK, tests and python code should now be python 3 compatible. Anyone willing to port the cpp part? Please merge tcalmant's tests. Current tests do not work because of old print syntax (for Python2 only). What kind of tests do you mean? We have a working testsuite. Please find the tests at https://github.com/originell/jpype/tree/master/test/jpypetest and the results at https://travis-ci.org/originell/jpype. Print statements in unit tests are generally a bead idea. We don't have print statements in our tests. I am talking about the parent folder: https://github.com/originell/jpype/tree/master/test The files there include print statements. You should fix for Python3 usage, as so tcalmant has obviously done. Thanks. Bytecompiling .py files below /home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4 using /usr/bin/python3.4 *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/buf_leak_test.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/buf_leak_test.py", line 19 print 'string got deleted' ^ SyntaxError: Missing parentheses in call to 'print' *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/buf_leak_test3.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/buf_leak_test3.py", line 27 print 'created string', cnt ^ SyntaxError: Missing parentheses in call to 'print' *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/convtest.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/convtest.py", line 31 print 'Running baseline test : converting a python string->array.array->JArray(JByte). size = ', len(data)/1024.0, 'kb' ^ SyntaxError: invalid syntax *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/findjvm.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/findjvm.py", line 20 print os.path.dirname(os.path.dirname(jvmlib)) ^ SyntaxError: invalid syntax *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/java_dom.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/java_dom.py", line 55 print count, "iterations in", t2-t, "seconds" ^ SyntaxError: Missing parentheses in call to 'print' *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/java_sax.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/java_sax.py", line 72 print count, "iterations in", t2-t, "seconds" ^ SyntaxError: Missing parentheses in call to 'print' *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/lists_and_maps.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/lists_and_maps.py", line 13 print arr ^ SyntaxError: Missing parentheses in call to 'print' *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/python_dom.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/python_dom.py", line 49 print count, "iterations in", t2-t, "seconds" ^ SyntaxError: Missing parentheses in call to 'print' *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/stub.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/stub.py", line 22 print s ^ SyntaxError: Missing parentheses in call to 'print' *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/test_awt.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/test_awt.py", line 5 print 'Thread started' ^ SyntaxError: Missing parentheses in call to 'print' *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/test_jarray_fixes.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/test_jarray_fixes.py", line 45 print 'iteration:', i ^ SyntaxError: Missing parentheses in call to 'print' *** Error compiling '/home/builder/rpmbuild/BUILDROOT/jpype-0.6.1-1.x86_64/usr/lib64/python3.4/site-packages/jpype/test/testlucene.py'... File "/usr/lib64/python3.4/site-packages/jpype/test/testlucene.py", line 8 raise IOError, "Please provide %s" % path.abspath(lucene_jar) ^ SyntaxError: invalid syntax https://github.com/originell/jpype/issues/153
GITHUB_ARCHIVE
Transcript from the "Permissions & Security" Lesson >> So let's go over permissions and security and how that works before actually starting seeing those APIs, those capabilities that I'm promising. So permissions and security. So some permissions, so some APIs are harmless. So they have no cost, no cost in money, no cost in privacy. I don't know, like it's just some information, some ability or I don't know - playing a sound. [00:00:32] Playing a sound over the speakers, it can harm your ears, yeah, but it has no cost. Some other APIs and permissions, they open a privacy risk. For example, opening the camera, opening the microphone, getting the user's location, the geolocation, so where you are on Earth. And yeah, there is a privacy risk there. [00:00:58] And sometimes it can involve a cost, such as making a call, sending an SMS. Okay, so there is a real cost there. So when there are costs or risks, some browsers decide to put a limit on the usage of that capability. The W3C specs of the standard for the web platform, typically that's not to say what you do. [00:01:24] So sometimes there are recommendations that the standard recommends the user-agent. So the browser to limit this API, but each browser decides how to implement that. Okay, let's say that most of the browser's are kind of the same but there are some API's that are different. So for example, on Firefox, you might see a permission dialog for using that API and in Chrome, you don't. [00:01:52] Okay, so it's not the same experience on every browser for some API's. Sometimes they just need user engagement requirements, for example, Chrome. Remember I mentioned one API that we won't cover today, that was known as Background Sync. Background Sync lets you sync your data in the background a few minutes later or a few hours later. [00:02:16] Let's say you're on an app, I know it's a calendar app, webapp. You're storing your new kind of entry but for some reason the server is down or you don't have connection you're offline for whatever reason you are on a plane and that plane has no Wi Fi. [00:02:36] So what to do with that? Yeah, you can save it on IndexedDB locally, but yeah, if you don't open that up again, that new entry will never hit the server. Well, with Background Sync, you can ask the browser, hey browser, I know we don't have connection right now, but can we try later, even if the user forgets about this? [00:02:59] Well, background sync will do that. When you get out of the plane and you turn off airplane mode, the browser, even if the user is not opening the browser. The browser is running in the background and it will know that now we do have connection. Okay, I have a pending thing. [00:03:15] Let's try to do that Chrome will not give the permission to the user permission dialog at all. So if you're the developer, you're asking for that sync operation the requirement to use that capability is a user engagement requirement, which roughly means. That Chrome will check if the user is currently using your app or not. [00:03:40] If it's using your app frequently, and that algorithm is, I don't want to say secret because it's open source, but it's not something that you can manage. So if there is enough user engagement, when you try to use the API, it will grant you access. If not, it will deny you access. [00:04:03] But the most common scenario, and probably you know that as a user, is to see a permission dialog. You enter a Zoom call from the website, or a Google Meet call, or a Microsoft Teams call, from the browser, and the first thing you see is a dialogue asking you for a camera and microphone access, okay? [00:04:23] Or Google Maps asking you for geolocation access. Well, that's a permission dialogue, okay? cool, most capabilities these days, require HTTPS. Mostly on Chromium-based browsers. Safari is still supporting HTTP for that, so if you want to get users location, if you want to connect to a bluetooth device, you must be in HTTPS. [00:04:49] And you know that that's a restriction for having a website today. So also some capabilities will need the user interaction to be enabled. For example, microphone and camera, or what that depends on the browser, but geolocation, bluetooth, Web Push, sometimes for the user to see a dialog with the permission dialog. [00:05:16] The user has to first click on a button you can not request that permission when the page loads, because that's a user experience problem. Before this we were getting into websites that were asking for permissions all the time you don't know why you're there yet, so you typically say no. [00:05:37] So now you require a user interaction before actually asking for permission. Permissions are granted on an origin base, so if the user is granting you permission for a capability, you will get that permission for the whole origin, so the whole domain. Different HTMLs, different folders in the same domain will also get granted that permission. [00:05:59] You cannot narrow that permission per folder, or per file, or per PWA, or per manifest, or per service worker, no, it's per origin. Okay, which is good and bad, it's just what it is. If the user denies a permission, the API won't be able to ask again. You can ask, but it will be automatically denied. [00:06:47] Okay, If the user grants the permission, depending on the capability, it may have no time limit. So it will be like forever, you are granted forever, for a couple of days, sometimes it's 15 days, 13 days a week, sometimes for a session or sometimes per one usage. It depends on the capability and it depends on the browser. [00:07:12] For example, camera and microphone, it's typically per session. Geolocation is typically per usage, so it depends. It depends on the capability. You cannot manage that, okay? Just keep in mind and that's the user experience. It's the best scenario world.
OPCFW_CODE
Before coming to Galvanize, I traveled for like six months. I was working as a sales operations manager at an event app startup. There had been a bunch of changes at the company, as with any young tech company. When I joined it was 20 to 25 people, and when I left it was more than a hundred. The company had been evolving over time, and I didn’t necessarily evolve with it. So I took some time off to travel. I quit my job and a few days later went up and did the Tahoe Rim Trail, which is a 180 mile trail that follows the mountain ridge around town. Then I flew to Europe, and later went to Costa Rica. I kind of went all over the place, just not having screen-time for a few months. I was on a really back part of the Tahoe Rim Trail. Most of it is pretty well-hiked, but with so many miles, some of it isn’t very well-maintained. We came across a bear and her two cubs, and had a pretty close encounter. We ended up singing a song to her. We’d met some Pacific Crest Trail hikers the day before, and they told us to sing if we saw a bear. Prey runs away, and predators will growl and snarl, but nothing really sings. I’m also a musician, so we sang “Some Nights” by Fun. I see data science as an extension of what I was doing before. After majoring in physics in college, I went into consulting, and there I quickly became “the Excel guy.” Then when I joined the event startup, we didn’t have a strategy department, so I was doing all sorts of strategy on top of Excel work. I thought: “Wouldn’t I be better at this if I had a trained skill set in it?” Rather than just thinking “I’m good at numbers because I took physics in college.” Plus I wanted to know Python. I had no coding experience coming in, and Python is way more powerful than anything I could do with Excel. And way more fun doing it. In the short term, I’m hoping to get a startup job—somewhere I can get on a data science team. In the long run, I think it’ll set me up well for a PhD in economics. All my heroes are economists. Paul Krugman. Bernie Sanders. Robert Reich—he’s actually not an economist, he’s a public policy expert, but he’s done a ton of economic work. He’s always writing about economics and income inequality. I read a lot of sci-fi, but you don’t aspire to be an astronaut, or at least I’m not going to be. But everything else that I read is generally politically oriented, and I’m much more interested in the economic side. I tried for a while, while I was on my vacation, to make it as a musician. I play guitar and I sing. It’s very hard, when you live in SoMa, to make it as a musician. But I go out a few times a week and play. I call myself a mixture of modern singer-songwriter, with ’90s rock, with ’80s hair metal. –Joel Shuman, Galvanize Data Science Immersive graduate in San Francisco, California. You can find him on LinkedIn. Want more data science tutorials and content? Subscribe to our data science newsletter.
OPCFW_CODE
Stop using Chrome! Download the Brave Browser via >>> [Brave.com] It's a forked version of Chrome with native ad-blockers and Google's spyware stripped out! Download for Mac, Windows, Android, and Linux! This isn't HASS.IO, this is HASS (aka just Home Assistant in a docker). This version of the install is called "Docker build" in HomeAssistant lingo. The HASS Docker install has no addon button and no pre-built packs like HASS, which is an OS that runs a SERIES OF DOCKERS, kind of like a PG setup but tiny. Anyway, the good news is that you can get in there yourself in ways you cannot with HASS.IO and add anything you would want from the container bash window (and SSL once it is installed) without things getting in your way! Infortunately; however, this is also the bad news. This will work just fine and do all of the same things (and more) but you need to get your knuckles bloody setting it up. Luckily there are lots of walk throughs, but unless you are good at linux you can say goodbye to 3 or 4 days of your life setting it up. Personally, I am running an instance on my "home" PG system (one of 3, long story...) that I use just for monitoring things on it like Plex, Kodi, etc... and then connecting that instance to my HASS.IO install on a Pi. This also gives the opportunity to use Traeffik instead of the new 5 dollar a month profit engine that the Home Assistant team setup and do all of your SSL and Open Port stuff without poking holes in your firewall with no security, but it is still not easy to connect multiple home assistant instances. Mine is not quite up to speed yet or I would do a write up with more instructions TLR; the version PG installs is the regular "docker" version of "Home Assistant", not "HASS.IO" (which has prebuilt addon packs) or "Hassbian" (which is more of an operating system). To find out how to configure a standard HomeAssistant build, go to: https://www.home-assistant.io/docs/installation/docker/ He is looking for the "HASS.IO" build's add-on components that come pre installed in the "HASS.IO" configuration. HASS.IO is basically a RPi OS running Docker Swarm (it now can be run as a "cluster-of-dockers" on a non Pi build) and requires not only a cluster of containers but also tends to conflict with non HASS.IO containers that are not approved. The single-container version that is installed by PG is just called "Home-Assistant - Docker" and it can use many all of the same options (if not more) but it requires manual installs for secondary components which often include adding secondary containers (Node-RED is one that comes to mind but there are hundreds of others) and then linking them. This really is a good solution for our build, fixes many of the remote-server/open port issues that HASSIO can have without paying for a service, and has much more configurability and less overhead than HASS.IO. Unfortunately, it is also hard as F$#%^&! to configure unless you are a Linux expert AND an expert researcher and most of the available community support is neckbeardy and disrespectful to new people in ways I haven't seen outside of a QNAP or TVDB forum. All that being said, the install and configuration for the single-container version is often a slow-motion car wreck with a cliff-shaped learning curve, take WEEKS, and can often bring about "Software-Aquired Tourettes Syndrome" in many cases, including my own!
OPCFW_CODE
stdout option for the tools that directly write to the file Some formatter tools like black from Python have no option for printing to the stdout. There should be an option so that those tools can directly write to the file without warnings. I'm also facing this with F#'s Fantomas. Similar with PHP's PHP-CS-Fixer In my Symfony project (PHP) I built a little wrapper script to overcome this limitation. It's not nice, but it works for now. I created the file bin/dprint-stdout-wrapper.sh #!/usr/bin/env bash # Currently dprint doesn't provide a way to use commands which modify the # file directly instead of printing the result to stdout # see https://github.com/dprint/dprint-plugin-exec/issues/26 # # Therefore we use this little wrapper script which we can call like this # bin/dprint-stdout-wrapper.sh vendor/bin/php-cs-fixer fix {{file_path}} # # The last argument MUST BE the file path as it first calls the command and # then uses cat to print that path to a file # https://unix.stackexchange.com/a/612122 FILE_PATH="${!#}" # see man bash, search for "indirect expansion" # this "pops" the file path from the argument list which we use via $@ set -- "${@:1:$#-1}" $@ $FILE_PATH && cat $FILE_PATH Now I can use the following config in my dprint.json: { // ... "exec": { "cwd": "${configDir}", "commands": [ { "command": "bin/dprint-stdout-wrapper.sh vendor/bin/php-cs-fixer fix {{file_path}}", "exts": ["php"] } ] }, "excludes": [ "**/node_modules", "**/*-lock.json", "assets/vendor/*", "assets/**/controllers.json", "composer.json", "public/assets/*", "package.json", "README.md", "**/*.twig" ], "plugins": [ // ... "https://plugins.dprint.dev/exec-0.5.0.json@8d9972eee71fa1590e04873540421f3eda7674d0f1aae3d7c788615e7b7413d0" ] } @adaliszk I just updated my comment. See "Edit 2" I just noticed one downside: When running dprint fmt the fix is still applied to the file. So we also would need dprint-plugin-exec to provide us with the current mode, maybe via a command template {{mode}}. Then in our script we can decide if we want to use fix or check (or --dry-run, etc) Yeah, I was thinking about making a wrapper like that, but the issue is that there would be two separate reads and writes for the same file. At the moment, I am exploring the option of changing the PHP-CS-Fixer and, in extension, Laravel Pint to have a print mode instead. Yes, that's really not very elegant, just a workaround. Also I now used a similar wrapper for Twig-CS-Fixer and then running prettier with the Plugin for Tailwind automatic class sorting on the file (https://github.com/ttskch/prettier-plugin-tailwindcss-anywhere) After also adding https://daisyui.com/ it was writing log output from daisyUI to my twig files. I now set an env var in my script which disables logging from the tailwind.config.js – also not very nice. Okay, my colleague had issues in VS Code because the command wrote to the file multiple times. I now updated my wrapper script to use a temporary file instead. bin/dprint-stdout-wrapper.sh #!/usr/bin/env bash # Currently dprint doesn't provide a way to use commands which modify the # file directly instead of printing the result to stdout # see https://github.com/dprint/dprint-plugin-exec/issues/26 # # Therefore we use this little wrapper script which we can call like this # bin/dprint-stdout-wrapper.sh vendor/bin/php-cs-fixer fix # # dprint passes the input via stdin which we write to a temporary file. # Then we apply the formatting to the temporary file and write the result # to stdout. dprint then writes the output back to the input file. TEMP_FILE=$(mktemp) # read stdin to temporary file cat - > $TEMP_FILE function cleanup { rm "$TEMP_FILE" } # make sure the temporary file is removed on exit, even in error cases trap cleanup EXIT $@ $TEMP_FILE >&2 && cat $TEMP_FILE To make it work for twig files in VS Code my workaround is even worse, but works now: bin/fix-twig.sh #!/usr/bin/env bash # Currently dprint doesn't provide a way to use commands which modify the # file directly instead of printing the result to stdout # see https://github.com/dprint/dprint-plugin-exec/issues/26 # # Therefore we use this little wrapper script which we can call like this # bin/fix-twig.sh {{file_path}} # # dprint passes the input via stdin which we write to a temporary file. # Then we apply the formatting to the temporary file and write the result # to stdout. dprint then writes the output back to the input file. # We use the file path for prettier to determine the correct parser # https://prettier.io/docs/en/options.html#file-path # We use the file path for prettier to determine the correct parser # https://prettier.io/docs/en/options.html#file-path FILE_PATH="$1" TEMP_FILE=$(mktemp) # read stdin to temporary file cat - > $TEMP_FILE # make sure the temporary file is removed on exit, even in error cases function cleanup { rm -f "$TEMP_FILE" } trap cleanup EXIT CURRENT_SCRIPT_DIR_NAME=$(dirname "${BASH_SOURCE[0]}") vendor/bin/twig-cs-fixer lint --report null --fix $TEMP_FILE # We only need a real file for Twig-CS-Fixer # For prettier we can work with a variable PRETTIER_RESULT="$(cat $TEMP_FILE)" # So we don't need our temp file anymore now rm "$TEMP_FILE" # Throw error after we tried formatting for 25 times FORMATTING_RETRY_LIMIT=25 COUNT=0 # We use prettier with prettier-plugin-tailwindcss to automatically sort tailwind classes according to the recommended # class order. To make it work with twig files we need to use plugin ttskch/prettier-plugin-tailwindcss-anywhere # # There is a bug when there are multiple elements with the exact same class attribute only the classes of the first # element are sorted. See https://github.com/ttskch/prettier-plugin-tailwindcss-anywhere/issues/2 # # We therefore run prettier several times until formatting no longer takes place. while true do if [ $((++COUNT)) -gt $FORMATTING_RETRY_LIMIT ]; then # echo to stderr so we're able to see it in the dprint output >&2 echo "Tried to format $FORMATTING_RETRY_LIMIT times with differing results. Twig formatting seems to be unstable." exit 1 fi # We use the DISABLE_DAISY_LOGS environment variable to disable daisyUI logs in our tailwind.config.js file # When logs are enabled that output would be written to our Twig files. PRETTIER_RESULT_UPDATE=$(echo -e "$PRETTIER_RESULT" | \ DISABLE_DAISY_LOGS=1 \ node_modules/.bin/prettier \ --config "$CURRENT_SCRIPT_DIR_NAME/.prettierrc" \ --stdin-filepath "$FILE_PATH") if [ "$PRETTIER_RESULT_UPDATE" != "$PRETTIER_RESULT" ]; then PRETTIER_RESULT="$PRETTIER_RESULT_UPDATE" else break fi done echo -e "$PRETTIER_RESULT" My dprint.json looks like: { // ... "markup": { "associations": [ "!**/*.twig" ] }, "exec": { "cwd": "${configDir}", "commands": [ { "command": "bin/ci-cd/dprint-stdout-wrapper.sh vendor/bin/php-cs-fixer fix", "exts": ["php"] }, { "command": "bin/ci-cd/fix-twig.sh {{file_path}}", "exts": ["twig"] } ] }, "excludes": [ "**/node_modules", "**/*-lock.json", "assets/vendor/*", "assets/**/controllers.json", "composer.json", "public/assets/*", "package.json" ], "plugins": [ // ... "https://plugins.dprint.dev/exec-0.5.0.json@8d9972eee71fa1590e04873540421f3eda7674d0f1aae3d7c788615e7b7413d0" ] } I just opened: https://github.com/PHP-CS-Fixer/PHP-CS-Fixer/discussions/8309 https://github.com/VincentLanglet/Twig-CS-Fixer/discussions/332
GITHUB_ARCHIVE
1.5. Modifying data¶ This chapter will explain the basic mechanisms for adding data to tables, removing data from tables, and modifying data. 1.5.1. Tables used in this chapter¶ For this chapter, we will work with the tables bookstore_inventory and bookstore_sales, which simulate a simple database that a seller of used books might use. The bookstore_inventory table lists printed books that the bookstore either has in stock or has sold recently, along with the condition of the book and the asking price. The column stock_number is a unique identifier for each book. When the bookstore sells a book, a record is added to the bookstore_sales table. This table lists the stock_number of the book sold, the date sold, and the type of payment. The column receipt_number is a unique identifier for the sale. (This may not be a very good database design; it assumes we only sell one book at a time!) A full description of these tables can be found in Appendix A. 1.5.2. Adding data using INSERT¶ To add rows to a table in the database, we use a statement starting with the keyword INSERT. In its simplest form, INSERT lets you add a single row to a table by providing a value for each column in the table as defined. As an example, suppose a customer at our bookstore purchases our copy of One Hundred Years of Solitude by Gabriel García Márquez. This book is listed in our inventory with a stock number of 1455. The customer purchases the book on August 14, 2021 and pays cash. Finally, we provide a receipt to the customer with receipt number 970. In the database, the table bookstore_sales is defined with these columns: receipt_number, stock_number, date_sold, and payment. We could record the sale using this statement: Try the above statement in the interactive tool, then use a SELECT query to verify that the new data has been added. Note that the order in which the values are listed matches the order in which columns are defined for the bookstore_sales table. 220.127.116.11. Specifying columns¶ Performing the insert as we did above works fine when we know for certain how a table has been defined in a database. However, tables change over time in practice, which may result in columns appearing in a different order, or in more columns being added to the table. If this happens, old SQL code that makes assumptions about the table structure will break. So it is a better practice to provide not only the data, but the names of the columns in which we want to put the data. To do this, we simply list the column names in parentheses after the table name: INSERT INTO bookstore_sales (receipt_number, stock_number, date_sold, payment) VALUES (971, 1429, '2021-08-15', 'trade in'); As described in Chapter 1.6, it is possible to have some of our columns be automatically generated by the database. For example, if we add a new book to our inventory, we want to generate a new, unique stock number. The bookstore_inventory table is set up to do this. When the database generates values like this for us, we should not provide a value for the generated column. Specifying column names lets us insert data for only the non-generated columns. The bookstore_sales table is likewise set up to generate unique receipt_number values; above we provided values for the receipt number, which only works as long as the values we provide are not already used. The bookstore_sales table also has a default setting for the date_sold column - it will put in today’s date for you if you do not provide a value for the column. Here is how the bookstore_sales table might be used in practice: INSERT INTO bookstore_sales (stock_number, payment) VALUES (1460, 'cash'); 18.104.22.168. Inserting multiple rows¶ While it is perfectly valid to do multiple INSERT statements to add multiple rows of data, SQL also lets you provide multiple rows in a single INSERT statement. Perhaps we wish to enter all of a day’s sales in one statement. We can enter this query: INSERT INTO bookstore_sales (stock_number, payment) (1444, 'credit card'), (1453, 'credit card') (Note for Oracle users: Oracle does not permit multiple rows in an INSERT.) 22.214.171.124. Inserting query results¶ SQL also provides the capability of providing values via a SELECT query. As a somewhat contrived example, suppose we create another table named bookstore_recent_sales with columns named author and title. We will store data in this table about books we sold recently (perhaps to see what books and authors are popular, to inform our purchasing). We might want to fill this table with the unique books that have been sold in the past month. The syntax is the same as a regular INSERT, but with the VALUES clause replaced by a SELECT query (which must return columns of the same type and in the same order as the columns we are inserting into). Try the statements below to see this in action. CREATE TABLE bookstore_recent_sales (author TEXT, title TEXT); INSERT INTO bookstore_recent_sales (author, title) SELECT DISTINCT i.author, i.title bookstore_inventory AS i JOIN bookstore_sales AS s ON s.stock_number = i.stock_number WHERE s.date_sold BETWEEN '2021-08-01' AND '2021-08-31'; 1.5.3. Removing data with DELETE¶ Removing rows from a table is accomplished using DELETE statements. DELETE statements are generally very simple, requiring only a FROM clause and optionally a WHERE clause. You can delete data from only one table at a time. As an example, if we want to remove all sales from bookstore_sales prior to August 1, 2021, we could write: This is probably a bad idea unless we first delete the data from bookstore_inventory for the books we are deleting - otherwise we might think that we still have those sold books. Since we cannot delete data from multiple tables in one query (e.g., using a join) it may be tricky to see how to get rid of the appropriate rows from bookstore_inventory. The information about what rows we want to delete is actually in bookstore_sales (in the date_sold column). The technique we need will be covered in Chapter 1.8 - using a subquery. Here is the necessary query, given without explanation for now: DELETE FROM bookstore_inventory WHERE stock_number IN (SELECT stock_number FROM bookstore_sales WHERE date_sold < '2021-08-01') In Chapter 1.7 we will discuss other techniques for keeping multiple tables consistent with each other. If the WHERE clause is omitted in a DELETE query, then all data from the table is removed. As with any data modification statement, the effects of a DELETE statement are immediate and permanent. To some extent, you can undo the result of an INSERT with a DELETE if you know which rows you inserted; however, it is impossible to restore deleted rows unless you have a backup of the data. Thus, it is very important to be sure you are deleting only what you want to delete. A simple way to test this before you perform a delete is to replace DELETE with SELECT * in your statement - this will show you exactly the rows that your statement would delete. Remember that with our interactive examples, any changes you make to this book’s database only last for the current viewing session, so if you wish to restore the deleted data, you may do so by refreshing the page in your browser. 1.5.4. Modifying data with UPDATE¶ One of the most powerful capabilities SQL provides is data modification using UPDATE statements. The form of an UPDATE is: column1 = expression1, column2 = expression2, Often, we may want to update a single row in our database. For example, perhaps we examine one of the books in our bookstore inventory and decide that its condition is better than we initially thought. Our copy of Slow River by Nicola Griffith (stock number 1460) is listed as in fair condition, with a price of 2 (in some unit of currency). We want to upgrade the condition to “good” and raise the price to 2.50 at the same time: We can also update multiple rows at a time. Perhaps we mistakenly put in all sales for August 1, 2021 as July 31 instead. We can fix these in one query: SET date_sold = '2021-08-01' WHERE date_sold = '2021-07-31'; Of course, this only works if none of the sales marked as July 31 were correct; we might have to be more clever with our WHERE clause if not. The real power of UPDATE, though, is that the right hand side of the assignments in the SET clause can be expressions, and these expressions are based on the row being updated. Hence, we can do something like the following: SET price = price + 0.25; This would raise the price of every book by 0.25. 1.5.5. Other data modification statements¶ SQL provides some other data modification statement types, which may or may not be supported in your database. TRUNCATE TABLE removes all rows from a table, and is typically faster than DELETE (but can only be used to remove all rows). MERGE is a somewhat complex operation that combines inserts, updates, and deletes, allowing synchronization of a table with another table or join of tables. Neither of these operations is strictly necessary, given that the same results can be accomplished with INSERT, UPDATE, and DELETE. We will not cover them further in this book. 1.5.6. Self-check exercises¶ This section contains exercises on INSERT, UPDATE, and DELETE, using the bookstore_inventory and bookstore_sales tables. Keep in mind that the database we are using for these exercises is shared with the interactive examples above, so any changes you have applied in an interactive tool above are reflected in the database you use below. If the results you get are not what you are expecting, you may need to reload this page in your browser to get a fresh copy of the database. If you get stuck, click on the “Show answer” button below the exercise to see a correct answer. Write a statement to add the book House Made of Dawn by N. Scott Momaday to the bookstore_inventory table. Use 1471 for the stock number, ‘like new’ for the condition, and 4.75 for the price. Write a statement to add all books by John Steinbeck (from our books table) into bookstore_inventory with a condition of ‘new’ and a price of 4.00. Note that there is no good way to provide unique stock numbers for each of these books, but if you omit the stock_number column entirely, the bookstore_inventory table is set up to provide unique values automatically. Write a statement to remove all books from bookstore_inventory that are in ‘fair’ condition. Write a statement to change the payment type to ‘cash’ for the sale with receipt number 963. Write a statement to set the price (in our bookstore inventory) for all books by Clifford Simak to a special sale price of 1.0. Write a statement to double the price of all books in ‘new’ condition.
OPCFW_CODE
Sunday had less time to be at the radio for ISS passes but one pass was ok. It started with the end of one image, one full image and the start of the next image. After hickups in recording audio from the radio on two previous passes I rebooted the whole system (it was nagging about a reboot anyway) and I received two more partial images. Thanks to ARISS Russia team member Sergey Samburov, RV3DR for making this possible! Second pass of the International space station gave me one partial picture and one complete (with some noise). In this weekend there are extra slow scan tv (SSTV) transmissions from the international space station (ISS). The ISS moves across the sky when viewed from earth so I calculate beforehand when it will pass across the sky and what the trajectory will be. I woke up in time to be outside for the first one. A low pass over the horizon and most of the pass matched a pause between transmissions, so not much image received. Het blijft actueel: Verschillende afpersmails in omloop - Fraudehelpdesk. Ik zie ze zelf ook op verschillende plekken. Trap hier niet in. Dit keer een bitcoin adres waar nog geen transacties in zichtbaar zijn: 12PUa2SHjWAUEpZZUxQNvxa7epab7g2Ksb alleen is mij niet duidelijk of deze site het verschil tussen een echt aangemaakt adres zonder transacties of een willekeurig adres weet. Toevoeging 2019-02-07: Een bedrag van 808 dollars in bitcoins staat nu in de wallet, in 2 transacties. Gegeven het bedrag in het originele mailtje zijn er dus 2 mensen ingetrapt. Toevoeging 2019-02-11: Er is nu over de 3000 dollar in bitcoins binnen. Als ik zo naar de transacties kijk lijken er 7 mensen ingetrapt. Nog meer informatie: Bitcoin Abuse Database for 12PUa2SHjWAUEpZZUxQNvxa7epab7g2Ksb (engelstalig). I noticed something really weird in the kernel log of a virtual machine:Feb 5 11:46:54 server kernel: [2936066.990621] Bluetooth: Core ver 2.22 Feb 5 11:46:54 server kernel: [2936067.005355] NET: Registered protocol family 31 Feb 5 11:46:54 server kernel: [2936067.005901] Bluetooth: HCI device and connection manager initialized Feb 5 11:46:54 server kernel: [2936067.006404] Bluetooth: HCI socket layer initialized Feb 5 11:46:54 server kernel: [2936067.006838] Bluetooth: L2CAP socket layer initialized Feb 5 11:46:54 server kernel: [2936067.007280] Bluetooth: SCO socket layer initialized Feb 5 11:46:54 server kernel: [2936067.009650] Netfilter messages via NETLINK v0.30. Feb 5 11:46:54 server kernel: [2936067.056017] device eth0 entered promiscuous modeThe last two are the giveaway about what really happened: I started tcpdump to debug a problem. But I did not expect (and do not need) bluetooth drivers on a virtual machine, it will never have access to a bluetooth dongle. After setting up /etc/modprobe.d/local-config.conf withblacklist bluetoothtcpdump still works fine and no bluetooth drivers are loaded. Update: Most recommendations are to disable the bluetooth network family:alias net-pf-31 off After a month with three digimode radio contests I plotted the number of amateur radio contacts again. The number of contacts is clearly higher each January as a contest month, with this January a new peak. The contests were the ARRL RTTY Roundup on 6 and 7 January, the UBA PSK63 prefix contest on 12 and 13 January and the BARTG RTTY Sprint Contest on 26 and 27 January. Nicer looking font due to the upgrade of "radio workstation" thompson. I guess even gnuplot is coming along with the modern times. before, before, before, before I have "always" been running amanda for backups on linux. Or rather, I can't find any indication when I started doing that several homeserver versions ago, it's just still running. Or it was running, but first I had to tackle a hardware problem: all SCSI controllers I have are PCI and the newest homeserver has no PCI slots. So I searched for a solution. The first solution was to try using the desktop system for the tapedrive, but the powersupply in that system has no 4-lead Molex connectors so I can't connect the tapedrive. For now I use an old 'test' system with some software upgrades to run amanda and shut it down when all backups are done and flushed to tape. But amanda had a serious problem writing stuff to tape. With some debugging this turned out to be caused by the variable blocksize I used on the previous systems, with# mt -f /dev/nst0 setblk 0and I can't even find out why this seemed like a good idea years ago. But now amanda really wants to use 32768 byte blocks and filled a DDS-3 tape (12 Gb without compression) with about 1.8 Gb of data before reaching the end of the tape. Why this default has changed isn't clear to me, but I found a way to re-initialize the tapes so the backups fit again. Based on block size mismatch - backup central I created a script to do this. I did not get the error about the blocksize, but I searched specifically for 'amanda 3.3.6 blocksize'.#!/bin/sh if [ "$1" = "" ]; then echo "Usage: $0 <tapename>" fi mt -f /dev/nst0 setblk 32768 mt -f /dev/nst0 compression 1 mt -f /dev/nst0 rewind dd if=/dev/zero of=/dev/nst0 bs=32768 count=200 mt -f /dev/nst0 setblk 32768 mt -f /dev/nst0 compression 1 mt -f /dev/nst0 rewind amlabel -f kzdoos $1And now normal amounts of data fit on a tape again. I just have to initialize every tape before using it for the first time in this setup. https://idefix.net/: Last post to be automatically imported into For years I automatically imported posts from google+ into my homepage at https://idefix.net/ and made them available on my own timelines. This is one of the things about Google+ I like: it's relatively easy to get access to the content and use it in other places. Google+ does not have (did not have) the tendency to suck in your data and keep it shielded from the outside world. This is why I liked it over other social networks. I don't expect a social network to keep things I post private. There's always that stalker in the back of my mind when sharing things online. So anything I post is completely public anyway, no need to keep it locked in. If I post a solution to some problem it's for anybody to read. And laugh at, snicker, or maybe use the solution. Byebye Google+ API. You will be missed. This weekend I participated in the BARTG (British Amateur Radio Teledata Group) RTTY Sprint Contest. I went into this contest with the idea of maybe getting some contacts and things turned out somewhat better than that: I made 82 contacts. No new countries or anything else special. The one that got away was PJ4P, Bonaire. I saw that station calling and I kept answering but the contact did not happen. I used the topendfed antenna outside and the amplifier. So I entered in the high power category. As with other recent contests the propagation wasn't cooperating very well. When I started in HF at home (October 2014) I would switch from 10 to 20 meters after it got dark because of the changing propagation. Now I change from 20 to 40 meters as soon as it starts to get a bit dark. : Fun in packaging: Hi mum!
OPCFW_CODE
The world of coding and programming can be an intimidating place for the uninitiated. It’s difficult to find a clear explanation of how it all works that isn’t laden with industry jargon that requires complex explanations. Consider this post the first of several in a Programming, Deconstructed series; our attempt at unpacking the topic and explaining the fundamentals of programming in a way that is accessible to everyone, regardless of their background. Like most complex topics, knowledge about programming is cumulative. So before we dig into a discussion of basic programming concepts or compare different languages, we need to answer the most fundamental question: what is programming at the most basic level? Let's start off by talking about a computer we all love to hate—the human brain. It’s a wildly versatile organ, allowing us to determine everything from how to catch a football based on it's initial trajectory to guessing how someone else feels based on nothing but their body language. But one of the most impressive functions of the brain is how it processes language. When trying to understand a sentence, the brain breaks it up into different parts: semantics and syntax (note: context is also pretty important, but that's best left for a more advanced discussion of programming). Semantics measure the meaning of a word, while syntax refers to the rules we have for combining words into phrases and sentences, and for understanding the relationship between words. Using a combination and semantics and syntax, the human brain is able to assign meaning to words and phrases that isn’t explicitly stated. Computer processors, on the other hand, don’t have the same ability to interpret syntax (and context), and that’s where programming comes in. It’s important to keep in mind is that both the processor in your computer and the human brain serve a similar function: they produce an output based on an input. But they process information in fundamentally different ways. To better understand programming, we first have to understand how humans and computers interpret the world differently. Contrary to popular belief, programming is, at its core, just creative problem solving according to a predefined set of rules. Whether it's fixing an existing tech-related headache or inventing a solution to a problem that, earlier, hadn't even been defined, programming isn’t necessarily about solving a computer problem, but more the process of using a computer to solve a real-life problem. The key to any type of problem solving is taking things step-by-step, and with programming it’s more like baby-step-by-baby-step. Because computers process information differently than the human brain, we have to explain things in different terms. Let's consider the task of making a peanut butter and jelly sandwich. First, you need to define your list of ingredients: a loaf of bread, a jar of peanut butter (chunky, you monster), a jar of jelly—raspberry is the only option, as we all know—one plate, and two knives (thou shalt not double dip). After defining the ingredients, the next step is to provide a set of instructions for making the sandwich. If you're not a programmer, your instructions might look a bit like this: 1. Remove two slices of bread 2. Put the peanut butter on one slice 3. Put the jelly on the other slice 4. Put them together Obviously, the computer didn't interpret the instructions correctly. In this example, the difference is a matter of inferences. A person is able to infer that "put the peanut butter on a slice of bread" is really a series of many steps that are quite complex, whereas a computer is frustratingly literal in the way it interprets instructions. If we were to imagine the conversation between a human and a computer, it might go something like this: Human: Open the jar of peanut butter, please. Computer: How do I do that? Human: Twist the cap Computer: What does 'twist' mean? Human: Rotate. Rotate the cap. Computer: How much should I rotate the cap? Human: I don't know. Three, maybe 4 times? Computer: 3 or 4 radians. Got it. Human: No. Full revolutions. Rotate the cap 1440°. Computer: Ok. Got it. Rotating the cap 1440°. Which direction? Human: ( ╯°□°)╯︵ ┻━┻ And that's just getting the jar of peanut butter open. That’s how programming works. It’s about thinking a few levels below and breaking down actions in the most simplistic steps. The entire process is methodical, and requires a very explicit, step-by-step breakdown to get to your exact desired outcome. Much of the frustration with human-to-computer communication can be minimized by leveraging programming languages. In the same way that an English speaker would recognize an assembly of letters in the Roman alphabet as words that form a sentence, computers recognize a series of 1s and 0s, known as 'binary code', that are assembled in a way that eventually leads to an output. While both computer processors and the human brain produce an output based on an input, they excel at completely different things. For example, consider the phrase Hello World! in English: it's 12 characters (10 letters, a space, and an exclamation point). Simple enough. But in binary, the same phrase is, 01001000 01100101 01101100 01101100 01101111 00100000 01010111 01101111 01110010 01101100 01100100 00100001. That's 118 characters vs. 11. If you're the type that's looking for patterns, you probably noticed that each group of 8 digits represents one letter. If it's not already obvious, binary code would be mind-bendingly difficult for a human brain to understand and translate quickly, but computers are great at it because it boils down to computing trillions of simple on/off calculations every second (in binary, 1 = on and 0 = off). For humans to tap into the full potential of computers, it quickly becomes necessary to be able to communicate commands to computers in a way that is mutually intelligible. Enter: programming languages. While programming languages started out relatively simplistically (from the point of view of the computer), they now operate significantly higher levels of complexity. The easiest way to visualize this is to think of a sliding scale with human language on one end and binary code on the other. In between each extreme lie various levels of programming languages. Languages closer to binary are considered "low-level," whereas languages that are closer in syntax to human language are considered "high-level." Assembly, for example, is a very low-level language that consists simply of a series of just three or four letters. Those letters are then run through what's a pre-built interpreter that converts the programming language into binary so that the computer can understand the input. Programmers now have access to languages that are much closer to human language, and therefore go through much more filtration and processes to boil back down to binary code. Although sadly there is still no mainstream computer program that can make you a peanut butter and jelly sandwich (if someone has this program written, please reveal yourself), programming languages have become very advanced. The programming community continues to build on the existing base of code and technology to make things easier and more automated. There are a handful of different programming languages out there, and the language you use depends on what problem you are working to solve. What’s important to remember is programming builds on itself, and once you’ve learned one language the next comes pretty easily. We hope this breakdown brought you a clear, more comprehensive understanding of both programming and programming languages. Keep an eye out for more posts in our Programming: Deconstructed series, we’ll dive much deeper into the specifics of programming languages, frameworks, and different functions and roles of developers.
OPCFW_CODE
Black formatter v2024.0.0 doesn't work in jupyter notebook After I updated Black formatter to v2024.0.0, whenever I save the notebook, a notification of "Saving xx.ipynb: formatting" will show up while the formatting will never be done. However, Black formatter v2023.6.0 still works well in terms of formatting on save. @mika457 Can you check Output > Black Formatter? see if it actually triggers formatting. @karthiknadig Hello! Thanks for your reply. Here are the logs of both v2023.6.0 and v2024.0.0. v2023.6.0 works well while v2024 fails to work in jupyter notebook. But I can hardly tell whether it triggers formatting or not by the logs. Black Formatter v2023.6.0.log Black Formatter v2024.0.0.log @karthiknadig Hello! Thanks for your reply. Here are the logs of both v2023.6.0 and v2024.0.0. v2023.6.0 works well while v2024 fails to work in jupyter notebook. Black Formatter v2023.6.0.log Black Formatter v2024.0.0.log Same here, I noticed that saving Jupyter notebooks became very slow as VS code was waiting for the black formatter to finish. It does format, but it takes a while as it's restarting the server process Unfold for Black Formatter output 2024-02-07 23:03:47.800 [info] [Trace - 11:03:47 PM] Sending request 'textDocument/formatting - (1)'. 2024-02-07 23:03:47.800 [info] [Trace - 11:03:47 PM] Sending request 'textDocument/formatting - (2)'. 2024-02-07 23:03:47.801 [info] [Trace - 11:03:47 PM] Sending request 'textDocument/formatting - (3)'. 2024-02-07 23:03:47.801 [info] [Trace - 11:03:47 PM] Sending request 'textDocument/formatting - (4)'. 2024-02-07 23:03:47.801 [info] [Trace - 11:03:47 PM] Sending request 'textDocument/formatting - (5)'. 2024-02-07 23:03:47.802 [info] [Trace - 11:03:47 PM] Sending request 'textDocument/formatting - (6)'. 2024-02-07 23:03:47.802 [info] [Trace - 11:03:47 PM] Sending request 'textDocument/formatting - (7)'. 2024-02-07 23:03:47.802 [info] [Trace - 11:03:47 PM] Sending request 'textDocument/formatting - (8)'. 2024-02-07 23:03:47.808 [info] [Trace - 11:03:47 PM] Received notification 'window/logMessage'. 2024-02-07 23:03:47.808 [info] C:\ProgramData\Anaconda3\python.exe -m black --line-length 110 --stdin-filename l:\UserData\<SNIP>\Repositories\<SNIP>\some-name\notebooks\plot_aggregated_kpi.py - 2024-02-07 23:03:47.809 [info] [Trace - 11:03:47 PM] Received notification 'window/logMessage'. 2024-02-07 23:03:47.809 [info] CWD formatter: l:\UserData\<SNIP>\Repositories\<SNIP>\some-name 2024-02-07 23:03:47.824 [info] [Trace - 11:03:47 PM] Received notification 'window/logMessage'. 2024-02-07 23:03:47.824 [info] reformatted l:\UserData\<SNIP>\Repositories\<SNIP>\some-name\notebooks\plot_aggregated_kpi.py All done! ✨ 🍰 ✨ 1 file reformatted. 2024-02-07 23:03:47.825 [info] [Trace - 11:03:47 PM] Received response 'textDocument/formatting - (1)' in 25ms. 2024-02-07 23:03:55.323 [info] [Info - 11:03:55 PM] Connection to server got closed. Server will restart. 2024-02-07 23:03:55.323 [info] true 2024-02-07 23:03:55.499 [info] [Error - 11:03:55 PM] Server process exited with code 1. 2024-02-07 23:03:56.282 [info] CWD Server: l:\UserData\<SNIP>\Repositories\<SNIP>\some-name 2024-02-07 23:03:56.288 [info] C:\ProgramData\Anaconda3\python.exe -m black --version 2024-02-07 23:03:56.288 [info] CWD formatter: l:\UserData\<SNIP>\Repositories\<SNIP>\some-name 2024-02-07 23:03:56.463 [info] Version info for formatter running for L:\UserData\<SNIP>\Repositories\<SNIP>\some-name: black, 24.1.1 (compiled: no) Python (CPython) 3.9.18 2024-02-07 23:03:56.463 [info] SUPPORTED black>=22.3.0 FOUND black==24.1.1 2024-02-07 23:03:56.464 [info] Settings used to run Server: [ { "cwd": "l:\\UserData\\<SNIP>\\Repositories\\<SNIP>\\some-name", "workspace": "file:///l%3A/UserData/<SNIP>/Repositories/<SNIP>/some-name", "args": [ "--line-length", "110" ], "path": [], "interpreter": [ "C:\\ProgramData\\Anaconda3\\python.exe" ], "importStrategy": "useBundled", "showNotifications": "onError" } ] 2024-02-07 23:03:56.464 [info] Global settings: { "cwd": "C:\\Program Files\\Microsoft VS Code", "workspace": "C:\\Program Files\\Microsoft VS Code", "args": [ "--line-length", "110" ], "path": [], "interpreter": [], "importStrategy": "useBundled", "showNotifications": "onError" } 2024-02-07 23:03:56.464 [info] sys.path used to run Server: c:\Users\<SNIP>\.vscode\extensions\ms-python.black-formatter-2024.0.0\bundled\libs c:\Users\<SNIP>\.vscode\extensions\ms-python.black-formatter-2024.0.0\bundled\tool C:\ProgramData\Anaconda3\python39.zip C:\ProgramData\Anaconda3\DLLs C:\ProgramData\Anaconda3\lib C:\ProgramData\Anaconda3 C:\ProgramData\Anaconda3\lib\site-packages C:\ProgramData\Anaconda3\lib\site-packages\oscillator_snap-1.0-py3.7.egg C:\ProgramData\Anaconda3\lib\site-packages\win32 C:\ProgramData\Anaconda3\lib\site-packages\win32\lib C:\ProgramData\Anaconda3\lib\site-packages\Pythonwin VS Code info: Version: 1.86.0 (system setup) Commit: 05047486b6df5eb8d44b2ecd70ea3bdf775fd937 Date: 2024-01-31T10:28:19.990Z Electron: 27.2.3 ElectronBuildId: 26495564 Chromium: 118.0.5993.159 Node.js: 18.17.1 V8: <IP_ADDRESS>-electron.0 OS: Windows_NT x64 10.0.14393 @mika457 try the pre-release version. This might also be related to https://github.com/psf/black/issues/4205 where black just ignores formatting some files. @karthiknadig OK, thanks I don't know why the latest version can not detect the conda env what I choose. @GF-Huang You have not selected a python for your workspace. Please select one using Python: Select Interpreter. Because we have not heard back with the information we requested, we are closing this issue for now. If you are able to provide the info later on then we will be happy to re-open this issue to pick up where we left off.
GITHUB_ARCHIVE
Last updated at Thu, 25 Jan 2024 01:14:54 GMT Dell DBUtil_2_3.sys IOCTL memmove privilege escalation Our very own zeroSteiner added a new module, which exploits insufficient access control in Dell's dbutil_2_3.sys firmware update driver included in the Dell Bios Utility that comes pre-installed with most Windows machines. The driver accepts Input/Output Control (IOCTL) requests without ACL requirements, allowing non-privileged users to perform memory read/write operations via the memmove function. This module exploits the arbitrary read/write vulnerability to perform local kernel-mode privilege escalation using the same token upgrade technique developed for the Win32k ConsoleControl Offset Confusion exploit. The exploit needs to be run from within at least a Medium integrity process to be successful, and any invalid read/write addresses will result in an immediate blue screen. The module has been tested on Windows version Windows TokenMagic privilege escalation Metasploit contributor jheysel-r7 added a new exploit module that leverages TokenMagic to elevate privileges and execute code as SYSTEM. This module can either be used to spawn a malicious service on a target system using the TokenMagic High IL, or it can be used to write a System32 DLL that is vulnerable to hijacking. The service method has been tested against Windows 1803). The DLL method has been tested against Windows New module content (4) - NetMotion Mobility Server MvcUtil Java Deserialization by wvu and mr_me, which exploits CVE-2021-26914 - This adds an exploit for CVE-2021-26914 which is a remotely exploitable vulnerability within NetMotion Mobility, whereby a crafted request can trigger a deserialization vulnerability resulting in code execution. - Dell DBUtil_2_3.sys IOCTL memmove by Kasif Dekel, SentinelLabs, and Spencer McIntyre, which exploits CVE-2021-21551 - This adds an exploit for CVE-2021-21551 which is an IOCTL that is provided by the DBUtil_2_3.sys driver distributed by Dell that can be abused to perform kernel-mode memory read and write operations. - Windows Privilege Escalation via TokenMagic (UAC Bypass) by James Forshaw, Ruben Boonen (@FuzzySec), bwatters-r7, and jheysel-r7 - A new module has been added to exploit TokenMagic, an exploitation technique affecting Windows 7 to Windows 10 build 17134 inclusive, that allows users to elevate their privileges to SYSTEM. Affected systems can be exploited either via exploiting a DLL hijacking vulnerability affecting Windows 10 build 15063 up to build 17134 inclusive, or by creating a new service on the target system. - SaltStack Salt Information Gatherer by c2Vlcgo and h00die - This PR adds a post module to gather salt information, configs, etc.. Enhancements and features - #15011 from acammack-r7 - Enhances the analyze command to show additional information about an identified exploit being immediately runnable, or if it requires additional credentials or options to be set before being ran - #15146 from smashery - This makes two improvements to the exploit for CVE-2021-3156 (Baron Samedit). It removes the dependency on GCC being present in the target environment. It also adds new targets for Ubuntu 16.04, Ubuntu 14.04, CentOS 7, CentOS 8 and Fedora 23-27. - #15178 from pingport80 - The auxiliary/client/telegram/send_message.rbmodule has been updated to support sending documents as well as to send documents and/or messages to multiple chat IDs. - #15202 from h00die - The list of WordPress plugins and themes have been updated to allow users to discover more plugins and themes when running tools such as - #15210 from adfoster-r7 - The documentation for exploit/multi/http/gitlab_file_read_rcehas been updated to provide additional information on how to set GitLab up with a SSL certificate for encrypted communications, allowing users to easily test scenarios in which an encrypted GitLab connection might be needed. - #15212 from cgranleese-r7 - Metasploit modules implemented in Python now explicitly require python3 to be present on the system path. This ensures that python2 is no longer used unintentionally, which previously occurred on Kali systems - #15196 from dwelch-r7 - A bug has been fixed in the msfdbscript that prevented users from being able to run the script if they installed Metasploit into a location that contained spaces within its path. - #15205 from willy00 - A bug has been fixed in the exploit/multi/http/gitlab_file_read_rcemodule to allow it to target vulnerable GitLab servers where TLS is enabled. - #15213 from dwelch-r7 - A fix has been applied to msfdbto use the passed in SSL key path (if provided) instead of the default one at ~/.msf4/msf-ws-key.pem, which may not exist if users have passed in a SSL key path as an option. As always, you can update to the latest Metasploit Framework with and you can get more details on the changes since the last blog post from If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).
OPCFW_CODE
Jellynovel Release that Witch novel – 1475 Change In Sky City brawny verse share-p3 Novel–Release that Witch–Release that Witch 1475 Change In Sky City mellow old “Around the happier side of factors, at the minimum, they have two witches accompanying him.” Hackzord shrugged and said to Anna, “The American Entry prepare is actually a prepare having lasted almost a hundred years. It’s to move the Crimson Mist through the Bottomless Area into individual territory. Furthermore it have a Childbirth Tower, additionally, it features a pa.s.sageway concealed amongst the mountain ranges. And also the entrance in this pa.s.sageway is only the extended distance of a mountain peak away from Everwinter’s Upper Spot. Therefore, posting your reinforcements into the Bottomless Territory doesn’t involve me to go through recurring ha.s.sles. When you are keen to take the potential risk, I will open the Distortion Home on your behalf.” If he were in Neverwinter, he suspected it would result in some fret, but now that humanity’s best threat—the demons’ King’s City—had fallen, and the floating area controlled by Eleanor made information circulation isolated, the circumstance of him becoming unconscious wouldn’t contribute to too undesirable an effect. As long as he rushed for time as well as complete the Oracle who has been secretly associated with the challenge, all can be excellent. “Therefore, the Leader you spoke to on the phone is either a departed man…” Roland enunciated every single message. “And the Oracle himself.” Immediately after discovering the photographs, every person drew a gasp in unison. Red-colored pockets possessed made an appearance on the metropolis streets and so they have been of varying dimensions. The big models were actually enough to cut through skysc.r.a.pers, and the little kinds were actually only enough to envelop a car. Martial performers ended up no complete stranger to things like this. It absolutely was a phenomena referred to as “Deterioration” that had damaged Prism Area. Certainly, she failed to prefer to impact some of the following packages. A short time down the road, Defender Rock and roll walked into your hall. Following benefiting from an affirmative answer, Silent Tragedy wore her head protection again and walked from the bedroom initially. “In the nicer section of stuff, at a minimum, he has two witches accompanying him.” Hackzord shrugged and thought to Anna, “The Traditional western Top system is often a program who has survived almost a century. It’s to move the Reddish colored Mist out of the Bottomless Property into human being territory. Not only does it include a Birth Tower, in addition, it possesses a pa.s.sageway undetectable on the list of mountain tops. And the entry ways on this pa.s.sageway is only the extended distance associated with a mountain far from Everwinter’s North Area. As a result, mailing your reinforcements into the Bottomless Ground doesn’t involve me to check regular ha.s.sles. For anybody who is keen to accept danger, I will open up the Distortion Front door for yourself.” “If it were one half annually earlier on, I might definitely be overjoyed finding this world.” Hackzord set aside the 5-decorated miraculous rock and sighed slightly. “Do you have attempt to communicate with the Fantasy World?” I’m one was always the one giving Peninsula a cup of coffee! Rock and roll was amazed for a moment before he came to the realization what he was implying. He could not aid but have a very radical alternation in expression when he mentioned, “How how is it possible? Skies Metropolis not merely has several Defenders presiding over it, and i also was even on the phone while using Director not longer ago—” “Obviously, but we didn’t succeed.” Responding him was Phyllis. “Our souls are will no longer recognized by that environment, and our companions who have been in in addition there are unconscious. This makes it impossible for many people to pa.s.s His Majesty Roland any news. The connection in between the two worlds are presently severed.” Under the tight security of the G.o.d’s Penalty Witches, Hackzord and Serakkas saw Roland in your bed. The beam of lighting over him stayed spectacular, almost efficient at enveloping 50 percent the drifting destination. “Would it be for Major problem Lord?” Anna asked right. When do the elderly demon lord get so in close proximity to this superstar martial performer? how many female presidents did india have till now There are already a lot of peers during the hall. Most of them ended up compiled in the region towards the back, when he was triggered the leading. The person sitting down beside him was a relatively acquainted confront, the famous person martial specialist wizard, Fei Yuhan. Immediately after obtaining an affirmative reply, Silent Disaster wore her helmet again and went out of your bed room 1st. Underneath the restricted protection of the G.o.d’s Abuse Witches, Hackzord and Serakkas observed Roland in sleep. The ray of light-weight over him stayed amazing, almost efficient at enveloping one half the drifting island. After finding the images, everyone drew a gasp in unison. “That’s for the greatest,” Anna said that has a nod. “I think by investing in Noiseless Tragedy escorting them, your subordinates will not likely head that we are credit this faster way.” When have the senior citizen demon lord get so near this celebrities martial artisan? Roland imagined for just a moment and shook his head carefully. “Potentially, we had been too slow from your very beginning.” If he were still in Neverwinter, he guessed that this would cause some get worried, however that humanity’s biggest threat—the demons’ King’s City—had fallen, as well as the hovering destination operated by Eleanor designed information movement isolated, the specific situation of him becoming unconscious wouldn’t bring about too unfavorable an results. As long as he rushed for serious amounts of complete the Oracle who was secretly right behind the matter, all could well be good. Soon after arriving at the foundation, Roland and Valkries ended up invited into a hallway by an attendant. Hackzord appeared to be comfortable with her way of performing factors. “Allow us to change venues to go over about how we have to go for any Mist Area which is entertained via the Sky-seas World.” Serakkas did not reply to. The only obtain Sky Lord acquired would be to check the circumstance as Anna mentioned, also it was allowed by her. The photographs showed many Fallen Evils. These were compiled round the holes, supposedly looking to cast their health within the green void. From that time he shown his capacity to absorb cores, Prism City’s greater-ups had deemed passing across the remaining stored cores from each division for his working with. The fact is, lots of tree branches experienced accomplished so, however, with Heavens City remaining central towards the a.s.sociation, they ultimately failed to give an affirmative reaction. They never envisioned this news they suddenly gained to get their most awful major problem. Evidently, the people positioned in the holes had been already destined, but it was not even close to getting the worst scenario. If he were in Neverwinter, he guessed that this would induce some fear, but this time that humanity’s greatest threat—the demons’ King’s City—had decreased, as well as the hovering destination managed by Eleanor built information and facts circulation isolated, the problem of him remaining unconscious wouldn’t contribute to too bad an results. Given that he hurried for time as well as completed the Oracle who had been secretly at the rear of the matter, all could well be great. The three traded basic greetings, and although just a few words were definitely traded, Roland could still feeling that Valkries’s att.i.tude towards Fei Yuhan was a lot better compared to what he acquired.
OPCFW_CODE
In this tutorial, we’ll talk about the “Curse of Dimensionality” (CoD), a phenomenon frequently addressed in machine learning. First, we’ll explain this concept with a simple argument. Then we’ll discuss how to overcome the CoD. 2. What Is the Curse of Dimensionality? The CoD refers to a set of phenomena that occur when analyzing data in a high-dimensional space. The term was introduced in 1957 by the mathematician Richard E. Bellman to describe the increased volume observed when adding extra dimensions to Euclidean space. In machine learning, the number of features corresponds to the dimension of the space in which the data are represented. A high-dimensional dataset contains a number of features of the order of a hundred or more. When the dimensionality increases, the volume of the space increases, and the data become more sparse. A higher number of features increase the training time but also affects the performance of a classifier. In fact, when we have too many features, it is much harder to cluster the data and identify patterns. In a high-dimensional space, training instances are sparse, and new instances are likely far away from the training instances. In other words, the more features the training dataset has, the greater the risk of overfitting our model. Such difficulties in training models with high dimensional data are referred to as the “Curse of Dimensionality”. 3. Hughes Phenomenon The CoD is closely related to the Hughes Phenomenon. It states that with a fixed number of training instances, the performance of a classifier first increases as the number of features increases until we reach the optimal number of features. Adding more features beyond this value will deteriorate the classifier’s performance: 4. How to Overcome the CoD? In theory, a solution to the CoD could be to enhance the density of data by increasing the number of training instances. In practice, it is hard to do. In fact, the number of training observations required to obtain a given density increases exponentially with the number of features. Dimensionality reduction techniques are effective solutions to the CoD. These techniques can be divided into two categories: feature selection and feature extraction. 4.1. Feature Selection Techniques Feature selection techniques try to select the features that are more relevant and remove the irrelevant ones. The most commonly used feature selection techniques are discussed below. - Low Variance filter: in this technique, the variance of each feature is computed over the training set. The features with low variance are eliminated. Such features will assume almost a constant value. Therefore they have no discriminating power. - High Correlation filter: the technique computes the pair-wise correlation between features. If a high correlation is observed for a pair of features, one of them is eliminated, and the other one retained. - Feature Ranking: in this case, Decision Trees models are used to rank the features according to their contribution to the model predictability. Lower ranked attributes are eliminated. 4.2. Feature Extraction Techniques Feature extraction techniques transform the data from the high-dimensional space to a representation with fewer dimensions. Unlike the feature selection techniques, the feature extraction approach creates new features from functions of the original ones. The most commonly used feature extraction techniques are discussed below. Principal Component Analysis (PCA) is the most popular dimensionality reduction technique. It finds the linear transformations for which the mean squared error computed over a set of instances is minimal. The column vectors of the matrix representing this linear transformation are called principal components. Original data are projected along with the principal components with the highest variance. Hence, the data are transformed to a lower dimensionality that captures most of the information of the original dataset. Linear Discriminant Analysis (LDA) finds the linear transformation that minimizes the inter-class variability and maximizes the separation between the classes. Unlike the PCA, the LDA requires labeled data. In this article, we reviewed the problems related to the “Curse of Dimensionality” in machine learning. Finally, we illustrated the techniques to solve them.
OPCFW_CODE
I play xball and woodsball and i want a gun that is good for both. I am kind of leaning towards the etha because of the barrel, feed neck, rate of fire, weight, board, etc. Which is the best for me? Planet Eclipse Etha vs. GOG eXTCy vs. 2011 Proto Rail Posted 08 February 2014 - 09:44 PM Posted 09 February 2014 - 12:36 PM I have a great tank and hopper, but i just need a good gun. My first thought was the etha but i really don't know. I want something in the 250-350 price range, and i can find all of those in that range. Posted 14 February 2014 - 09:10 AM Go with the Etha I am about to get one as well. I play both speedball and woodsball and this marker is great for both. The basic maintnence on this marker is super simple and the marker is also very light and feels great in your hands. I have talked to many people who own the etha and they say its very good only thing is to break the marker in its gonna take "What I was told" 9 cases to 10. Posted 14 February 2014 - 09:23 AM Posted 14 February 2014 - 09:31 AM the etha would be a good choice if you buy one used and get the EMC kit if you aren't going to get the EMC kit go with the Rail It feels very good in the hands is very quiet, the etha vibrates and has a really weird noise when it shoots also the barrel on the etha is massively over bored the rail comes with a decent barrel i think it is bored at .689,which is still an over bored barrel most of the time but is way better than a .693 as for the rate of fire the rail doesn't shoot above 15bps which is perfectly fine since most fields limit the BPS on guns anyway, just put it in pep 12.5 and go to work also you can pick up a used rail for $125-$160 Edited by scpb5696, 14 February 2014 - 09:32 AM. "It costs me more than $3,000 to play paintball every year, therefore I'd get a used highend for around $300- $600 and use the rest towards paint, gas, food, hotel costs, alcohol, and cheap hookers." SOUP on March 31, 2014 Posted 14 February 2014 - 12:56 PM If you're under 18 and pay for your own stuff, put this in your signature. 0 user(s) are reading this topic 0 members, 0 guests, 0 anonymous users
OPCFW_CODE
Can a user be unable to delete files within a directory the user owns? Assuming root privileges are unavailable to user A and user B. Suppose: User A makes directory X with 777 permissions User B then makes a directory X/Y with 755 permissions. User B then makes a file in X/Y/troll with 755 permissions. What is the correct behavior if user A tries to run: rm -rf X/Y What about on the "troll" file? I have just tested this on my machine and user A cannot delete user B's files. Is this correct? If so, does that mean user B could make a very large file in A's directory that A could not delete and thus exceed A's quota? If you tested it then whats the problem?? Quotas are generally linked to the user and not the directory https://unix.stackexchange.com/questions/99191/quotas-not-linked-to-users-but-to-directories That's why you shouldn't give write permissions to your home directory. Once again, very simple edits make the question clear. No need to be so hard on new users. (Swapped C for X, D for Y) and expanded to show example full path of each file. Seperated setup steps into bullet points and wrote rm command in full. D or Y? It's a basic manner to express your doubts clearly when you ask for answer. @炸鱼薯条德里克 Asking questions clearly is a skill that needs to be learned. Lack of experience should never be mistaken for lack of manners. @PhilipCouling thank you for editing the question and providing an answer. Yes this is expected behavior and as you point out it can be used to troll another user who has given others write permission on their directory. As you show correctly a directory without write permission created with contents by a "troll" user can only be deleted by that user and root. This is derived from the fact that you cannot remove any directory which is not empty and you cannot modify another user's directory without permission. Typically this doesn't cause a problem with resource limits (quotas) as they are usually calculated by file ownership not directory location and this is one reason that regular users cannot chown their own files to another user. Otherwise they could pass (chown) a user a file to which that user has no access to delete it. There is still a way to troll quotas with this: if user A changed permissions on X after user B added a file to it: chmod 700 X User B would then be unable to delete the file. Without a hardlink to any files there they couldn't view or rewrite them either. While you cannot move directories, you can move another user's file if you have write permission on the parent directory. So world writable directories are generally ill advised. Instead, in Linux when passing files to another user, always leave the files in your own directory and give read access. The other user can copy the files for themselves with no risk to you or them. In almost every case the answer to this type of behavior is to ask the troll user politely to stop and then report them user to the sysadmin if they don't. Suppose you don't have any other fancy stuff like stick bit or ACL or file capabilities or something. Since A can't write to Y, A can't unlink the troll file, then Y is not empty, so it can't be deleted, eventually, nothing would happen at all. Whether a process of FSUID=A can unlink a file owned B really depends on … many conditions. Please focus on credentials of process instead of talking about usernames all the time. So does your last question. Linux really have so much fancy stuff, do you have permission to do something? It really depends on so many conditions…
STACK_EXCHANGE
I'm trying to install python3 for one of remote hosting over ssh. I don't have root access. Installation was done with: wget https://www.python.org/ftp/python/3.7.0/Python-3.7.0.tgz tar xvzf Python-3.4.3.tgz ./configure --prefix=$HOME/.local make make install This installs Python 3, however in the end of installation this error occurs: File "/home/someusername/Python-3.7.0/Lib/ctypes/__init__.py", line 7, in <module> from _ctypes import Union, Structure, Array ModuleNotFoundError: No module named '_ctypes' python3 is installed, but pip install failed. After some research it seems that libffi is missing. After using wget to obtain libffi-3.2.1, it is installed with: ./configure --prefix=$HOME/.local make make install This shows it is installed: someusername@a2plcpnl079 [~/.local/lib]$ ls ./ ../ libffi-3.2.1/ libpython3.7m.a* pkgconfig/ python3.7/ someusername@a2plcpnl079 [~/.local/lib]$ cd libffi-3.2.1/ someusername@a2plcpnl079 [~/.local/lib/libffi-3.2.1]$ ls ./ ../ include/ someusername@a2plcpnl079 [~/.local/lib/libffi-3.2.1]$ cd include someusername@a2plcpnl079 [~/.local/lib/libffi-3.2.1/include]$ ls ./ ../ ffi.h ffitarget.h someusername@a2plcpnl079 [~/.local]$ cd lib64 someusername@a2plcpnl079 [~/.local/lib64]$ ls ./ ../ libffi.a libffi.la* libffi.so@ libffi.so.6@ libffi.so.6.0.4* Now it is necessary to reconfigure the build of python-3.7.0 so it uses the local libffi. I tried a number of variations but still can't install pip. # .bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs PATH=$PATH:$HOME/.local/bin:$HOME/bin export PATH export HISTTIMEFORMAT="%d/%m/%y %T " export PATH="$HOME/.local/bin:$PATH" export LD_LIBRARY_PATH=$HOME/.local/lib64 Something like this was attempted: $ ./configure --with-system-ffi --prefix=$HOME/.local LDFLAGS="-L/home/someusername/.local/lib64" LIBS="-L/home/someusername/.local/lib" The same error occurred, so the question is how to correctly invoke Python-3.7.0 configure to use the local libffi library in order to fully install python?
OPCFW_CODE
Python tips and tricks pdf 20 useful Python tips and tricks you should know - DEV Community 👩💻👨💻Python tricks the book pdf github. This file is available in plain R, R markdown and regular markdown formats, and the plots are available as PDF files. Check out Basic concepts and features tutorial and Getting Started from Python official website. Geoprocessing with Python teaches you how to access available datasets to make maps or perform your own analyses using free tools like the GDAL, NumPy, and matplotlib Python modules. It's the post mortem of the project to develop Microsoft Word 1. 6 Things That Confuse Python Beginners Python Tricks 101 Snapshot from. Like the real site, random page retri. We can use a list to initialize a no. Python is a dynamically-typed language.With the aid of the Python programming language. Like the real site, and even has a donate me. Ever wanted to quickly share a file over a network. The book is based on PyCharm 4. So without wasting any time lets get straight to them: If we divide whole numbers Python. Pytnon of the most useful is the map function - especially in combination with lambda functions. Learn to code for free! Hackernoon Newsletter curates great stories by real tech professionals Of course you have. She has had more than one such snake in her criminal career. Join Sign In. It really has taken me many years to fully appreciate this language. His tutorials, we covered the basics of ctypes and some advanced ctypes usage, note taking and highlighting while reading Mastering Python Data Visualization! In previous tutorials, videos. And 75 times the number of information sources I. Python tricks the book pdf github. Python Books. Above commands would start a server on the default port i. Python Crash Course, 2nd Edition is a straightforward introduction to the core of Python programming. These are data structures that let you add and retrieve entries according to a specific rule. Shop now. Sort by Date Title. Customize features on your iPad running iOS 12 including the home screen, control center, and system settings. Three leading experts from the Software Engineering Institute introduce integrated, empirically developed principles and practices that any software professional can use to gain control of technical debt in any software system. Take advantage of special member promotions, everyday discounts, quick access to saved content, and more! Join Today. Operator overloading Python provides support for operator overloadingwhich is one of those terms that make you sound like a legit computer scientist. Spark is the open source cluster computing system that makes data analytics fast to write and fast to run. GitHub offers an exploratory approach to learning Python. Please refer to pythkn below example. Some I found while browsing the Python Standard Library docs. Reduce is a really useful function for performing some computation on a list and returning the result. This month's cover story examines Python in the Linux environment. Normally, perl hands you entire lines.In this book all the examples are in Python and most of the code relies on the excellent Keras framework. Justin Seitz is a trcks security researcher for Immunity, Inc. Getting things done in Python often requires writing new classes and defining how they interact through their interfaces and hierarchies. Python has a feature just for you. Python is a dynamically-typed language. To do this, navigate to a directory containing some printable files. Contributing sharing an ebook. Cairo is a powerful 2d graphics library.
OPCFW_CODE
Can a case extend a trait class? The answer is simple: Case Class can extend another Class, trait or Abstract Class. Create an abstract class which encapsulates the common behavior used by all the classes inheriting the abstract class. Can you extend case class Scala? Case Class can NOT extend another Case class. However, they have removed this feature in recent Scala Versions. So a Case Class cannot extend another Case class. NOTE:- In Scala Language, case-to-case inheritance is prohibited. Can object extends trait Scala? Unlike a class, Scala traits cannot be instantiated and have no arguments or parameters. However, you can inherit (extend) them using classes and objects. In which of the given cases is it appropriate to define a trait in Scala? Traits are used to define object types by specifying the signature of the supported methods. Scala also allows traits to be partially implemented but traits may not have constructor parameters. A trait definition looks just like a class definition except that it uses the keyword trait. What is the benefit of case class in Scala? First and foremost benefit of Case Class is that Scala Compiler adds a factory method with the name of the class with same number of parameters defined in the class definition. Because of this benefit, we can create objects of the Case Class without using “new” keyword. What is difference between class and case class in Scala? A class can extend another class, whereas a case class can not extend another case class (because it would not be possible to correctly implement their equality). Can a class extend an object in Scala? there are two restrictions to extend a class in Scala : To override method in scala override keyword is required. Only the primary constructor can pass parameters to the base constructor. What is difference between Case class and class in Scala? Can a trait extend multiple traits? We can extend from multiple traits, but only one abstract class. Abstract classes can have constructor parameters, whereas traits cannot. What is the difference between trait and class? Trait supports multiple inheritance. Abstract Class supports single inheritance only. Trait can be added to an object instance. Abstract class cannot be added to an object instance. What is the difference between trait and class in Scala? Is Scala case class immutable? A scala case class also has all vals, which means they are immutable. How do you extend a trait in Scala? In Scala, one trait can inherit another trait by using a extends keyword. Traits support multiple inheritance. In Scala, a class can inherit both normal classes or abstract class and traits by using extends keyword before the class name and with keyword before the trait’s name. What is the difference between class and case class in Scala? Can a class extend multiple classes in Scala? You can’t extend multiple classes, but you can extend several traits. Unlike Java interfaces, traits can also include implementation (method definitions, data members, etc.). Can you extend multiple classes in Scala? While we cannot extend multiple abstract classes, we can extend multiple traits in Scala. What is trait class in Scala? In scala, trait is a collection of abstract and non-abstract methods. You can create trait that can have all abstract methods or some abstract and some non-abstract methods. A variable that is declared either by using val or var keyword in a trait get internally implemented in the class that implements the trait. Are Case classes singleton? A case object, on the other hand, does not take args in the constructor, so there can only be one instance of it (a singleton like a regular scale object is). A case object is a singleton case class. Can a trait extend an abstract class? A class can extend only one abstract class, but it can implement multiple traits, so using traits is more flexible. What is a trait in Scala?
OPCFW_CODE
Handpicked Automation Testing Tools to boost Testing Testing is a tedious task and when it comes to designing testing tools, there is no leverage in that. There are several nitty gritties that one has to keep in mind. While designing automated testing tools not only one has to shadow the chaotic techniques involved but at the same time you have to make the techniques accessible to a wide range of users. People have a variety of testing buds too, some opt to script their tests, working directly with commands, variables, and objects so they can manipulate the interactions without any of the restrictions imposed by a user interface. Others are more reliable in writing code and prefer using a GUI-based tool to record actions and create assertions. The choices do not stop here, even for scheduling the tests to run, a few choose to clasp the tests to automated build tools and the rest who hail for a common center for scheduling tests in different scenarios. If you are dealing with vendors, it is your call where to invest how much time with your resources. And yes, like always, since testing’s end result depends on the target audience, here again you should be well aware of your user’s expectations while designing any testing tools. Different organizations have a different mindset to manage their automation. Definitely it’s imperative for the vendors to work on the tools they are building with an undivided attention, but at the same time is must not be forgotten their designed tools should have multiple preferences to adjust as many users as possible. Though there are many testing tools available in the market but listed below are the five universal best automation testing tools: Watir – The Most Elegant Web Application Testing in Ruby: is a very powerful tool from an open-source (BSD) family of Ruby libraries for automating web browsers. It allows you to write tests that are easy to read and maintain. Watir is an automated test tool which uses the Ruby scripting language to drive the Internet Explorer web browser. Watir is a toolkit for automated tests to be developed and run against a web browser. It supports your app no matter what technology it is developed in. Whilst Watir supports only Internet Explorer on Windows, Watir-WebDriver supports Chrome, Firefox, Internet Explorer, Opera and also running in headless mode (HTMLUnit). Selenium: Selenium automates browsers. The tool remembers what you did as you clicked around in a browser, and produces a code that can be used in automated tests. The code produced can be Java, C#, Groovy, Perl, PHP, Python and Ruby. You can even modify the code and customize it as need be, to make your automated tests all the stronger. Primarily it is for automating web applications for testing purposes. Selenium has the support of some of the largest browser vendors who have taken (or are taking) steps to make Selenium a native part of their browser. It is also the core technology in countless other browser automation tools, APIs and frameworks. TestComplete: TestComplete is an automated testing tool that lets you create, manage and run tests for any windows, web or rich client software. It makes it easy for anyone to create automated tests. Some features are open APIs, easy extensibility, tons of documentation, scripted testing for total flexibility, windows and web testing, application support etc. It is an easy to use, all-in-one package that lets anyone start automating tests in minutes with no special skills. It has a low price, powerful features and impressive support resources. SoapUI – The world’s most complete testing tool: SoapUI is an open source tool that does web service testing for service-oriented architecture. Allowing a development team to run automated regression, compliance, functional and load tests, soapUI gives you complete test coverage. It is an impressive suite, and definitely worth a closer look for any shop producing web services. Tools can be handpicked up as per our own definite preferences as there are a variety of tools accessible. The above mentioned tools have uncommon features that make your selection worthwhile. These automated tools will surely result in effective testing as they will lessen the error/bug count in your releases. Also read: Automated Testing Tools
OPCFW_CODE
[12:37] <zxm-pi> allo allo [15:28] <jlj> o/ [15:48] <zxm-pi> \o [16:29] <daftykins> good early eve to all \o [16:30] <daftykins> what a beautiful day here, regret heading out wearing trousers... [16:30] <zxm-pi> if covid has taught us anything it's that pants should be optional :-P [16:31] <diddledan> the freebsd install process is evil: https://mirrors.dotsrc.org/fosdem/2021/stands/freebsd/freebsd_video2.webm [16:32] <zxm-pi> firefox upgraded itself a week ago and refused to work since. wouldn't redraw until i moved the window. fun playing with firefox to coax it back to life [16:33] <zxm-pi> using chromium for a week reminded me why i prefer firefox :-) [16:37] <daftykins> The fox can do no wrong! [16:38] <daftykins> it truly disappoints me how much i usee normals using chrome as their daily driver [16:38] <daftykins> -u [16:38] <zxm-pi> it's the new internet explorer but eviler [16:39] <diddledan> at least IE didn't sell you out [16:39] <zxm-pi> it did bend over backwards to the evil hackers on the internet [16:40] <daftykins> it did make up for that with a cup holder feature though [16:40] <zxm-pi> mine came on 5.25" disks. not a good coaster [16:43] <daftykins> a pal has been showing some frustration toward Firefox of late, only for him to discover he's not on the stable release channel xD [16:44] <daftykins> diddledan: i might give Andy a prod about the site, as i figure if he's content with it as-is it must be about pay time - unless they've done so already? [16:44] <zxm-pi> i even tried the quantum version in the snap store. fantastic, fast. but doesn't download. [16:47] <daftykins> won't touch those formats personally [23:01] <daftykins> lol, been playing with the firmware modified camera... i just went to eject the microSD card and it flew across the room somewhere, i now have no idea where it is [23:08] <zxm-pi> a good cf card would have made a nice clunk when it landed :-P [23:08] <daftykins> :D [23:08] <daftykins> i feel like i heard a very specific metallic *ding* that this grille on a speaker makes when things tap it, but i can see nothing around there [23:09] <zxm-pi> what i do in these situations is use a torch to shine on a spot and use that to run search pattern around floor to cover entire area. allows you to focus on one small spot [23:10] <daftykins> *nod* must be some furniture / equipment moving in my future in addition to that [23:11] <zxm-pi> and micro sd is small enough that it will slide into gap between carpet and skirting board [23:11] <daftykins> D: don't tempt fate! :D [23:11] <zxm-pi> so finger tip search to see if you can feel it [23:11] <daftykins> *slice* [23:12] <daftykins> there's only one small section of carpet on the staircase of my place... naturally the cat always manages to target it when unwell [23:12] <zxm-pi> it's genetic [23:12] <daftykins> came home today to find more additions with grass in, truly skilled that one
UBUNTU_IRC
[Thamesvalley-pm] calling all Perl newbies! gmatt at nerc.ac.uk Tue Aug 21 06:32:41 PDT 2007 Adam Trickett wrote: > Calling all Perl newbies! > What do you want from the PM? > How can we help you? I subscribed because I have occasion to use perl for scripting as part of my job. Unfortunately, that isnt very often so I've never become completely comfortable just sitting down in front of a #!/usr/bin/perl and getting on with it. Most of what I do with perl is fix other (much older) scripts, written in the dim-and-distant past by some unknown employee, as bit-rot renders them less useful. More occasionally, I write my own stuff such as log scrapers or tweak projects such as logwatch. Time pressures mean I've never built up a solid expertise in perl so I need a forum where I can ask dumb questions when the camel and the cookbook let me down. I have a couple of projects that I turn to when I get a moment but haven't made much progress recently: 1. update our password changing script. This was originally written by an ex-employee, updated a few years ago by me to reflect a change from NIS to LDAP. It now needs a more fundamental rewrite to make it much 2. We have a mailbox that users can send wrongly tagged spam mail to. i.e. false positives. Users are asked to send these false positives as an attachment in an attempt to keep all the header information intact. At the moment I use mutt to read this mailbox and extract all the correctly attached false postives to a mbox folder which I can then eaily feed through spamassassin to train the Bayes database. The process is scripted apart from the sorting the cruft from the genuine using my own eyeballs and mutt. I did make an attempt to write something that could recognise the correct attachments but it turned out to be much harder then I expected. Translating what seems obvious to me while using mutt into a robust, scripted algorithm proved too difficult. project 1 is much simpler and could probably be done in a week. Project 2 has proved beyond my skills. In my case perl is purely for work so I am unable to devote a lot of time to the list. That said, perhaps listers would like to share examples of code that they are particularly proud of (and presumably be prepared to get shot down in flames!) If anyone was wondering, I havent had a chance to look at that virus-check script yet (post: 14/08) and try out some of the proposed Greg Matthews 01491 692445 Head of UNIX/Linux, iTSS Wallingford This message (and any attachments) is for the recipient only. NERC is subject to the Freedom of Information Act 2000 and the contents of this email and any reply you make may be disclosed by NERC unless it is exempt from release under the Act. Any material supplied to NERC may be stored in an electronic records management system. More information about the Thamesvalley-pm
OPCFW_CODE
Welcome to Python 101! I wrote this book to help you learn Python 3. It is not meant to be an exhaustive reference book. Instead, the object is to get you acquainted with the building blocks of Python so that you can actually write something useful yourself. A lot of programming textbooks only teach you the language, but do not go much beyond that. I will endeavour to not only get you up to speed on the basics, but also to show you how to create useful programs. Now you may be wondering why just learning the basics isn’t enough. In my experience, when I get finished reading an introductory text, I want to then create something, but I don’t know how! I’ve got the learning, but not the glue to get from point A to point B. I think it’s important to not only teach you the basics, but also cover intermediate material. Thus, this book will be split into five parts: - Part one will cover Python’s basics - Part two will be on a small subset of Python’s Standard Library - Part three will be intermediate material - Part four will be a series of small tutorials - Part five will cover Python packaging and distribution Let me spend a few moments explaining what each part has to offer. In part one, we will cover the following: - Python types (strings, lists, dicts, etc) - Conditional statements - List and dictionary comprehensions - Exception Handling - File I/O - Functions and Classes Part two will talk about some of Python’s standard library. The standard library is what comes pre-packaged with Python. It is made up of modules that you can import to get added functionality. For example, you can import the math module to gain some high level math functions. I will be cherry picking the modules I use the most as a day-to-day professional and explaining how they work. The reason I think this is a good idea is that they are common, every day modules that I think you will benefit knowing about at the beginning of your Python education. This section will also cover various ways to install 3rd party modules. Finally, I will cover how to create your own modules and packages and why you’d want to do that in the first place. Here are some of the modules we will be covering: - smtplib / email - thread / queues - time / datetime Part three will cover intermediate odds and ends. These are topics that are handy to know, but not necessarily required to be able to program in Python. The topics covered are: - the Python debugger (pdb) - the lambda function - code profiling - a testing introduction Part four will be made up of small tutorials that will help you to learn how to use Python in a practical way. In this way, you will learn how to create Python programs that can actually do something useful! You can take the knowledge in these tutorials to create your own scripts. Ideas for further enhancements to these mini-applications will be provided at the end of each tutorial so you will have something that you can try out on your own. Here are a few of the 3rd party packages that we’ll be covering: - pip and easy_install - pylint / pychecker Part five is going to cover how to take your code and give it to your friends, family and the world! You will learn the following: - How to turn your reusable scripts into Python “eggs”, “wheels” and more - How to upload your creation to the Python Package Index (PyPI) - How to create binary executables so you can run your application without Python - How to create an installer for your application The chapters and sections may not all be the same length. While every topic will be covered well, not every topic will require the same page count. A Brief History of Python¶ I think it helps to know the background of the Python programming language. Python was created in the late 1980s. Everyone agrees that its creator is Guido van Rossum when he wrote it as a successor to the ABC programming language that he was using. Guido named the language after one of his favorite comedy acts: Monty Python. The language wasn’t released until 1991 and it has grown a lot in terms of the number of included modules and packages included. At the time of this writing, there are two major versions of Python: the 2.x series and the 3.x (sometimes known as Python 3000) . The 3.x series is not backwards compatible with 2.x because the idea when creating 3.x was to get rid of some of the idiosyncrasies in the original. The current versions are 2.7.12 and 3.5.2. Most of the features in 3.x have been backported to 2.x; however, 3.x is getting the majority of Python’s current development, so it is the version of the future. Some people think Python is just for writing little scripts to glue together “real” code, like C++ or Haskell. However you will find Python to be useful in almost any situation. Python is used by lots of big name companies such as Google, NASA, LinkedIn, Industrial Light & Magic, and many others. Python is used not only on the backend, but also on the front. In case you’re new to the computer science field, backend programming is the stuff that’s behind the scenes; things like database processing, document generation, etc. Frontend processing is the pretty stuff most users are familiar with, such as web pages or desktop user interfaces. For example, there are some really nice Python GUI toolkits such as wxPython, PySide, and Kivy. There are also several web frameworks like Django, Pyramid, and Flask. You might find it surprising to know that Django is used for Instagram and Pinterest. If you have used these or many other websites, then you have used something that’s powered by Python without even realizing it! As with most technical books, this one includes a few conventions that you need to be aware of. New topics and terminology will be in bold. You will also see some examples that look like the following: >>> myString = "Welcome to Python!" The >>> is a Python prompt symbol. You will see this in the Python interpreter and in IDLE. You will learn more about each of these in the first chapter. Other code examples will be shown in a similar manner, but without the >>>. You will need a working Python 3 installation. The examples should work in either Python 2.x or 3.x unless specifically marked otherwise. Most Linux and Mac machines come with Python already installed. However, if you happen to find yourself without Python, you can go download a copy from http://python.org/download/. There are up-to-date installation instructions on their website, so I won’t include any installation instructions in this book. Any additional requirements will be explained later on in the book. I welcome feedback about my writings. If you’d like to let me know what you thought of the book, you can send comments to the following address:
OPCFW_CODE
SHLOMIF/Test-Run-CmdLine-0.0131 - 20 Jan 2016 11:36:40 GMT - Search in distribution SHLOMIF/Test-Run-0.0304 - 13 Dec 2015 11:12:24 GMT - Search in distribution SHLOMIF/Test-Run-Plugin-ColorSummary-0.0202 - 13 Dec 2015 11:40:37 GMT - Search in distribution - Test::Run::Plugin::ColorSummary - A Test::Run plugin that colors the summary. This is a Test::Run::CmdLine plugin that terminates the test suite after the first failing test script. This way, you can know more quickly in case something went wrong. To enable, add "BreakOnFailure" to the "HARNESS_PLUGINS" environment variable an...SHLOMIF/Test-Run-Plugin-BreakOnFailure-v0.0.5 - 31 May 2015 15:37:34 GMT - Search in distribution - Test::Run::Plugin::BreakOnFailure - stop processing the entire test suite upon the first failure. SHLOMIF/Test-Run-Plugin-ColorFileVerdicts-0.0124 - 02 Jun 2015 17:38:51 GMT - Search in distribution This is a Test::Run::CmdLine plugin that allows enabling alternate interpreters. One can specify them by setting the 'HARNESS_ALT_INTRP_FILE' environment variable to the path to a YAML configuration file which lists the interpreters and their regular...SHLOMIF/Test-Run-Plugin-AlternateInterpreters-0.0124 - 31 May 2015 14:56:29 GMT - Search in distribution - Test::Run::Plugin::AlternateInterpreters - Define different interpreters for different test scripts with Test::Run. - Test::Run::Plugin::AlternateInterpreters::Straps::AltIntrPlugin - a plugin for Test::Run::Straps to handle the alternative interpreters. This is a Test::Run::CmdLine plugin that allows one to trim the filenames that are displayed by the harness. It accepts the parameter by using the 'HARNESS_TRIM_FNS' environment variable. A few sample ones are: fromre:\At\z (to match everything up to...SHLOMIF/Test-Run-Plugin-TrimDisplayedFilenames-0.0125 - 19 Jun 2015 07:14:52 GMT - Search in distribution - Test::Run::Plugin::TrimDisplayedFilenames - trim the first components of the displayed filename to deal with excessively long ones. A Riap client in the form of a simple interactive command-line shell (as opposed to Perinci::Access which is a Perl library, or peri-run and peri-access which are non-interactive command-line interface). Provides a convenient way to explore API servi...PERLANCAR/App-riap-0.37 - 10 Jul 2017 11:58:56 GMT - Search in distribution depak*) is a CLI application to pack your dependencies (required pure-Perl modules) along with your Perl script into a single file. It will trace what modules your script requires using one of several available methods, and include them inside the sc...PERLANCAR/App-depak-0.57 - 14 Jul 2017 13:29:36 GMT - Search in distribution This script runs a dux function on the command line. Dux function receives items as lines from files/stdin, and outputs items as lines of stdout....PERLANCAR/App-dux-1.53 - 10 Jul 2017 11:26:45 GMT - Search in distribution - Perinci::CmdLine::dux - Perinci::CmdLine subclass for dux cli An Argv object treats a command line as 3 separate entities: the *program*, the *options*, and the *args*. The *options* may be further subdivided into user-defined *option sets* by use of the "optset" method. When one of the *execution methods* is c...DSB/Argv-1.28 - 13 May 2013 15:01:11 GMT - Search in distribution PERLANCAR/Rinci-1.1.86 - 09 Dec 2017 11:32:09 GMT - Search in distribution - Rinci::function - Metadata for your functions/methods pb helps you build various packages directly from your project sources. In order to work correctly, it relies on a certain number of configuration files. Most of these configuration parameters can be setup in all the configuration files, however, the...BCO/ProjectBuilder-0.14.1 - 28 Sep 2016 00:03:18 GMT - Search in distribution IO::Prompter exports a single subroutine, "prompt", that prints a prompt (but only if the program's selected input and output streams are connected to a terminal), then reads some input, then chomps it, and finally returns an object representing that...DCONWAY/IO-Prompter-0.004014 - 23 Nov 2015 21:50:55 GMT - Search in distribution This module adds a small number of new regex constructs that can be used within Perl 5.10 patterns to implement complete recursive-descent parsing. Perl 5.10 already supports recursive=descent *matching*, via the new "(?<name>...)" and "(?&name)" con...DCONWAY/Regexp-Grammars-1.048 - 26 Sep 2017 20:21:35 GMT - Search in distribution Overview Getopt::Declare is *yet another* command-line argument parser, one which is specifically designed to be powerful but exceptionally easy to use. To parse the command-line in @ARGV, one simply creates a Getopt::Declare object, by passing "Geto...FANGLY/Getopt-Declare-1.14 - 09 Mar 2011 07:49:10 GMT - Search in distribution Some shells, like bash/fish/zsh/tcsh, supports tab completion for programs. They are usually activated by issuing one or more "complete" (zsh uses "compctl") internal shell commands. The completion scripts which contain these commands are usually put...PERLANCAR/App-shcompgen-0.321 - 08 Feb 2018 15:02:25 GMT - Search in distribution - App::shcompgen - Generate shell completion scripts The WebFetch module is a framework for downloading and saving information from the web, and for saving or re-displaying it. It provides a generalized interface for saving to a file while keeping the previous version as a backup. This is mainly intend...IKLUFT/WebFetch-0.13 - 21 Sep 2009 05:02:33 GMT - Search in distribution This utility will run your script (finding it in "PATH" if not found in current directory) while setting "COMP_LINE" and "COMP_POINT" to test how your script will perform shell completion. In addition to that, it will also load Log::ger::Output::Scre...PERLANCAR/App-CompleteUtils-0.16 - 08 Feb 2018 14:46:57 GMT - Search in distribution Bencher is a benchmark framework. The main feature of Bencher is permuting list of Perl codes with list of arguments into benchmark items, and then benchmark them. You can run only some of the items as well as filter codes and arguments to use. You c...PERLANCAR/Bencher-1.041 - 03 Apr 2018 06:58:12 GMT - Search in distribution
OPCFW_CODE
Assuming you've already checked the likely suspects, here are some random thoughts on jitter troubleshooting. (FWIW, many of these will break other things and are not suggested as a fix, just a troubleshooting aid ) A) Try to distinguish whether the DCM input clock is affected when the I/O switches; or, if the DCM itself is being affected; or, if both are - clock your DDR clock forwarding flop directly from the input clock, with no DCM: does it still get the jitters when the QDR I/O switching starts? i.e. 100 Mhz input clock -> BUFG -> DDR output ( IIRC, you don't need to fiddle with DIFF_OUT buffers for global clock forwarding in V4 due to the already differential global clock distribution ) - if you have another clock input ( esp. in a quiet bank ), temporarily clock the QDR logic from that ( with and without DCM ) and see if the jitter changes B) DCM Duct Tape - LOC the DCM to the other DCM sites on the chip; see if that affects the jitter Even if it's not an optimum LOC for the DCM because of the GCLK pin location, and there needs to be a long clock route to get there, putting the DCM on the other side of the chip away from I/O activity may help your jitter ( but not meet system timing ) - change FACTORY_JF as described in Answer Record 13756 If decide to try CLKFX, see AR 21594 and AR 18181 ( V2/S3 era advice, not sure how it applies to V4 ) - change DCM DESKEW_ADJUST to SOURCE_SYNCHRONOUS to turn off the internal DCM feedback delay element (more V2 era advice) ( see pages 4-5 of XAPP259 ) - Do you have any spare LVDS input/outputs elsewhere on the chip ? ( handy for clock troubleshooting ) - If you run a 'hammer' test 0000 FFFF instead of pseudorandom patterns on the QDR address/data lines, does the jitter get much worse and/or the DCM unlock ? ( also try changing the toggle rate, 1,2..N clocks ) FWIW, my S3 Starter Kit SRAM memory test that used a x2 DCM would unlock on hammer patterns, even with slow slew I/O meeting SSO limits, unless the DCM was LOC'd to the other side of the chip away from the SRAM I/O. - Is the QDR interface bandwidth sufficient to allow for Asteroids vector generator emulation at 1080p resolution?
OPCFW_CODE
Brain Training with Chinese Characters You can train your brain by trying to write chinese characters as answers to questions as fast as possible. The key is to use characters that you already know. The questions can be the English definition for the character or its Chinese pronunciation (in pinyin or Zhuyin or Gwoyeu Romatzyh... depending on which one you prefer to use.) As an example, you might have these three words: I, you, her. You have to write the chinese which would be: 我, 你,她. (As a side note, you could do this writing by hand which is what I first envisioned. However, if you are learning a Chinese Character touch typing technique like Cangjie, then you could use this "method" to practice your chinese typing skills.) Before I go on about writing Chinese Characters as quickly as possible, some background information is required. What is brain training exactly and how does it work? My first exposure to brain training was via "Train Your Brain: 60 Days to a Better Brain" by Ryuta Kawashima (aug 2005). In this book, the simple guidelines to training your brain were to practice answering math questions, simple ones, as fast as possible. Another way that was mentioned is to practice reading aloud. The book contains a series of math quizzes each containing 100 questions. The questions where simple add, subract, multiply or divide questions. The goal was to answer the questions as fast as possible. And to make it more interesting the author provided some time goals. For the Bronze level, the time you needed was under 2 minutes. For Silver, under 90 seconds. For Gold, under 1 minute. What I found after doing these brain training exercises is that I felt refreshed afterwards. I felt like I could think clearly. My whole body also felt slightly energized, like I'd done just done a really nice yoga class, but all I'd been doing is answering math questions as quickly as possible. I found that I most often felt this way the less I thought about the answers and the quicker I just let myself write the answers down. I didn't worry about getting the answers wrong or write. Instead I focused on not thinking. By the way, you did get penalized for wrong answers. However, whether I got the bronze level or not, when I tried to answer questions quickly and without thinking, that is when I felt the best afterwards. Simple Questions Or Questions We Already Know the Answers To Part of the reason is that the quizzes used simple questions. Or is it? Most of us have enough math experience that we know automatically the 2x3=6 or that 9/3=3. We've done simple math equations enough times that we don't have to figure the answers out. We've learned them or memorized them simply by repeating them so many times. And in my case, over and on top of regular school work, every time I came home from school I had to write out the multiplications tables by hand, sometimes 2 or 3 times. So I know they became "built in." Brain training in this case was learning to access answers that I already knew as quickly as possible. What we tend to think of as "simple" are things that we don't have to think about in order to know the answers. If you took the time to practice multiplication tables up to 25x25 (and not just square roots mind you!) you could use these types of questions to train your brain. Contains stroke order diagrams for all 3000 characters. Also shows four or more character combinations containing the character you are looking at. In some cases also has a Chinese phrase or idiom that contains the character in question. Going For Gold After practicing for a few weeks, I reached the silver level. For an experiment I tried writing out numbers as quickly as possible to see how fast I would have to write to get to the gold level. Basically I would have to write non-stop. There was no time for to look at the question or figure it out. I wondered how I'd be able to get to Gold level. It seemed impossible. As I said, the idea was not to think about the answers. The answers where already apart of me. What I did was to look at each question and as soon I started to write the answer I moved my eyes to the next question. Then, when I finished writing the question, my eyes moved from the question I was looking at to the next question. Meanwhile my hand wrote the answer to the question I'd just been looking at. Meanwhile, I didn't try to figure out the answer. It went from my brain to my hands without "me" having to think or intervene. This is when I truly started to feel alive and glowing. I think this skill is similar to learning to touch type. Once you've learned how to touch typeyou don't think about which keys to press. You let your fingers do it for you. This was more or less what I was doing by "looking ahead." I'd led my mind figure out or access the answer each time my eyes saw the question. Meanwhile my fingers and hand would write the answer out. Again, I should emphasize that I was not thinking. I didn't say to myself, hhmm, 2+2 is... oh yes 4. I simply wrote down the answer. And I didn't even look at my hand as I was writing. Instead, I wrote on auto pilot, or so it seemed. Did I get every question right? No I didn't. But getting the questions write was only an indicator of which questions and answers I hadn't made a part of my long term memory. The goal was to feel good. And as for the wrong answers, it was a simple enough process to practice the questions I got wrong so that in future I could get them right without having to think about the answer. So then, how does this apply to writing chinese characters? Top 10 Chinese Characters Cangjie Typing Code Makes previous character possesive or adjective buˋ, buˊ, fouˇ, fou yiˋ, yiˊ, yi One, a, indefinate article yes, is, am, are have, possess, own le, liaoˇ, liaoˋ particle that shows completion big, important, grown up country, state, nation Your Chinese Hurts My Ears Say you focus on learning characters in groups of ten. You focus on learning how to write the character. Then you learn it's meaning. And then you learn its pronunciation. Focusing only on the ten characters you've just learned, you could have a sheet with the randomized english meanings of those characters. Then, as you read each definition you write out the character without thinking. Meanwhile your eyes look to the next definition so that you are ready to write out the next character. You repeat the process for each successive question. If you are writing a character without looking it may end up looking not so pretty. For the purposes of brain training that's alright. Just so long as it is legible or recognizable. You could practice doing this for each group of ten characters. Then, when you've practiced and learned 50 characters, you can mix all 50 character definitions up and try to write the characters as quickly as possible. You could also test yourself and train your brain by using the pinyin pronunciation as the question. So long as you don't have two characters with the same pronunciation you are okay. (Or to be more specific your question sheet could contain the pinyin pronunciation and the English meaning of each character.) You'll quickly find out which characters that you don't know so well. And then in between tests you can practice writing these characters so that you do know them well. Training Your Brain while Learning Now a large part of learning to write characters and make them a part of you is first learning them and that can be a part of your brain training process also. Each time you learn a new chinese character you are downloading a new pattern into your brain. And if you are practicing writing characters, you are not only changing the memory part of your brain, you are also changing the motor control portion, the part that controls your hand. You can train your brain while learning to write characters by breaking down each character into easy to remember parts. You focus on one part at a time, working on five or so strokes, enough that you can practice flowing from stroke to stroke but not too many that you have to think in order to figure out the strokes. You can practice the same strokes over and over again until they become built in and then you can go on to the next portion of the character. Add together the pieces until finally you can do the whole character from memory without having to think. Then it's time for the next character. When doing this "breaking down" process, don't be afraid to break down characters in abnormal ways. Also, don't be afraid to practice the last few strokes first, and then the strokes before that. Like when writing out math equations as quickly as possible, when you find problem areas, stop and focus on the strokes that are giving your problems, then gradually add in the rest. And then take a rest when you've had enough. Taking a break is important to, since if you keep on pushing yourself you'll end up with a sore head instead of an energized and refreshed feeling. I've created some PDF's that contain the top 500 chinese characters organized by: - English definition, - Pinyin and They also contain Cangjie typing codes. You can check them out at http://chinesecharacterdictionary.zeroparallax.com/ Getting Into the Flow Ideally what "brain training" does, whether which characters or math questions is get you into the flow. The flow is a state of being where you don't think. Instead you can be watching yourself, or you can be focused on using your senses and responding to what you sense. In the case of answering math questions or writing chinese characters, the questions are what you sense, and the answers are your response. Because you aren't thinking, you are learning to access memory as quickly as possible. You are also allowing yourself to write (or do) without limiting or second guessing yourself. This state of being is very similar or even the same as the state that jazz musicians get into when they do improv. As you practice flowing your non-thinking mind begins to see connections that your thinking mind mind not notice. You enter a creative space where the limitations no longer apply. That isn't to say that limits are a bad thing. You need limits when learning. You limit yourself to 10 characters, or to certain brush strokes within those characters. Or you limit yourself to math questions that you have trouble with, until they no longer trouble you. Then you no longer need the limits. You practice being creatively free. And even though you free yourself of one set of limits you are still limited. It is just that what limits you is the idea of what you are doing whether it is answering math questions or painting chinese characters or playing jazz. It's just that now what you have is possibilities within a large set of limits, and those could very well be endless.
OPCFW_CODE
A step-by-step tutorial for 3 extensions in 1 project by Walter Xie Welcome to the step-by-step tutorial to implement a LPhy or LPhyBEAST extension. Before reading this tutorial, we suggest you learn some essential knowledges about Gradle, and know how IntelliJ IDEA integrates with Gradle. It is also helpful to understand the technial background, which is available in the system integration section. 1. Setup development environment The development requires Java 17, Gradle, and IntelliJ. If you are not familiar with this step, please follow the tutorial to setup development environment and load your Gradle project into IntelliJ. IntelliJ will automatically import modules and dependencies from the Gradle settings. Please do not try to change them through Project Structure in IntelliJ, because these changes might be lost after reimporting. 2. Establish a standard project structure The extension project must be set to the standard Gradle directory structure. You can either create a Gradle project using IntelliJ, or simply copy the structure from an existing example, such as Phylonco, and then fill in your contents. For example, in Figure 1, you need to rename the 3 subprojects (subfolders) to your subprojects, which are phylonco-beast, phylonco-lphy, and phylonco-lphybeast. Then replace their names inside include to yours, which are highlighted in the Please be aware that if you are migrating your existing projects, you need to use either IntelliJ or git mv to move files, otherwise the history of files will be lost. 3. Fill in your project metadata We are using composite builds. In the project root, there is a settings.gradle.kts to configure this structure, and a common build file build.gradle.kts to share build logics. Furthermore, each subproject has its own build file. They have been pointed by a red arrow in Figure 1. You need to replace the project metadata in these files to your project information. The main changes are listed below, and click links to see where they are: subprojects, please refer to section 2. group, version and webpage, also the overwritten version. manifest file in each jar, either the shared attribute or individual attributes, such as phylonco-beast build, phylonco-lphy build, and phylonco-lphybeast build. Maven publication metadata, if you will publish to the Maven central repo. The advanced tutorial Gradle project master - project structure will explain this in detail. 4. Dependency management In this step, you need to configure the dependencies block for your subprojects. First we recommend to use but this requires all libraries are published to the Maven central repo. If this is unavailable, then you can consider to use which stores the released libraries in a lib folder in the repository and load them as files. The significant drawbacks are that you have to manually validate their dependencies and update them. Sometimes, if a subproject depends on another in the same repository, you can import it using But we do not recommend this, if the subproject can be imported using module dependencies. There are several types of Each of them defines a specific scope for the dependencies declared in a Gradle project. Please be aware of the key difference and be respectful of consumers. The advanced tutorial Gradle project master - dependencies will introduce the details of these concepts. 5. Java development To create a LPhy extension, you need to create the container provider class under the package mypackage.lphy.spi to list all extended Java classes, and register it in the The following posts explain how the core and extensions work in a technical level. The advanced tutorial Java extension mechanism of LPhy and LPhyBEAST will demonstrate the usage of LPhy and LPhyBEAST extension mechanism. 6. Build the project The output of basic tasks will be kept in a build folder created by each build. For example, the libs contains all jar files, the distributions contains the zip or tar files, test-results contain unit test results.
OPCFW_CODE
Make a note of the 'public' deal with which you can use to receive funds. Adoption of Bitcoin is pretty much dependant on the convenience of its use. Therefore, ensure to position your mining hardware somewhere with good ventilation so the heat can dissipate simply. You'll be able to be taught extra about pooled mining on the Bitcoin invest 401k in bitcoin Wiki. Following a request from Satoshi, Julian Assange refrained from accepting Bitcoin until mid-approach by way of Grownup service suppliers whose livelihood relies on such promoting haven't any solution to pay for it apart from Bitcoin. To verify no one can potentially pilfer your Bitcoins, first test that your pool uses SSL. You'll see your hash price at the underside right and the present state of your where to invest bitcoin or ethereum work in the underside bar. If neither of these options appeals, you can rent hash power from cloud mining corporations. We'll stroll you thru the technique of signing up for Slush's Pool because it's one we have used too much, but the identical procedure can be used for any of the main pools. In our cryptocurrency investing premium service we explained in great detail how this formation tends bitcoin exchange venezuela to have a bullish consequence, supplied it respects this formation in fact. This web site uses Google Analytics, a web analytics service supplied by Google, Inc. ("Google"). Best way to start investing in bitcoin In embodiments, provided herein is a transaction-enabling system having a machine that robotically sells its compute storage capability on a forward market for storage capacity how to trade crypto on robinhood and having a distributed ledger that tokenizes a firmware program, such that operation on the distributed ledger supplies provable entry to the firmware program. In order to use this you'll need access to a mobile gadget and an app similar to Google Authenticator or FreeOTP. Don't let anyone see your personal keys as anyone with entry to your paper wallet can control your virtual money. We suggest grabbing EVGA's Precision X utility, although you will get fairly far utilizing the overclocking tools which are a part of AMD's Catalyst drivers. So, is there any option to promote Bitcoin for PayPal? The next stage is to sign as much as a pool; you can solo mine, but you want some serious hardware to make it worthwhile. Nearly all of OTC transactions involve giant sums of cash because OTC bitcoin brokers help rich traders keep away from “slippage”. If you are mining as an investment and don't plan on spending any of your coins quickly, think about using a website like Bitcoin Paper Wallet Generator to create a 'paper' wallet. While you'll be able to technically try to mine Bitcoin by yourself, it's very unlikely that your rig will singlehandedly resolve the advanced sums necessary to obtain a reward. Python bitcoin exchange You can too trade fiat forex for bitcoins at these online ATMs. 3. We’re in a free market the place anyone can get involved, that is not like more traditional markets. Normally, probably the most liquid contracts are the month-to-month expiries, a sample that holds equally as properly in crypto markets as in traditional markets, like equities, commodities, and foreign alternate. Within the article, the writer laid out a detailed case for generating yield on Bitcoin (BTC) holdings by investing in options markets as an alternative of decentralized finance (DeFi) apps. As soon as you've made the decision that mining is right for you, you may additionally need to arrange a Bitcoin wallet to store your profits. Bear this in thoughts if you plan to carry onto any BTC you mine moderately than promoting instantly. If an investor sells a name option with a strike value decrease than today’s value (or the worth the investor expects the asset to hold at day of expiry), they must be able to promote your asset at that decrease strike value. Bitcoin startup investment Bitcoin futures how to invest - how long does it take to trade bitcoin - buy crypto exchange software - can u invest in bitcoin - margin trading bitcoin exchanges - all exchange bitcoin price how to earn free bitcoin without investment http://onlinechristiancolleges.org/best-crypto-to-day-trade-today http://www.mastersincomputerscience.net/best-crypto-lending-platform.html crypto exchange hacks
OPCFW_CODE
When I try installing apps from Terminal (Discord in this case) using sudo pacman -S PACKAGENAME i get this error - error: could not open file /var/cache/pacman/pkg/discord-0.0.12-0-x86_64.pkg.tar.zst: Unrecognized archive format error: failed to commit transaction (cannot open package file) Errors occurred, no packages were upgraded. And when I try installing the package from Package Manager I get this error - Failed to commit transaction: cannot open package file Does anyone know what’s causing this error? Any help is appreciated Edit: The Problem was that i used an old ISO Can you return here pacman -V, install with iso version/number ? zst is a new format for pacman since 9 months where did you find that way about installing software via Manjaro’s terminal? Discord is in Manjaro’s repo, [tmo@msi ~]$ pacman -Ss discord All-in-one voice and text chat for gamers that's free and secure. To install a software package, the basic syntax is pacman -S packagename. However, installing a package without updating the system will lead to a partial upgrade situation so all the examples here will use pacman -Syu packagename which will install the package and ensure the system is up to date. [tmo@msi ~]$ sudo pacman -Syu discord pamac install discord See Manjaro wiki about If a software is not in the official repos you can install it from Arch User Repository - Manjaro The Manjaro wiki If you get for what ever reason an error with ZSTD not supported as archive format you can do this: sudo pacman -Syy sudo pacman -S pacman-static sudo pacman-static -Syyu I’m assuming you used some very old install .iso. Arch and Manjaro switched to zstd packaging format. With old, not updated system it cannot update itself because of the Unrecognized archive format. Hence, the need to install a pamac version that can go around it. pacman -S appname is the standard and official install command on pacman. There could be additional options for installation but the base is just -S . It was not clear enough as it was his first post what he was trying to do, for a moment I thought he was trying to install a locally downloaded package from somewhere bypassing the repo -which in his case should be installed with the -U switch-, then I saw the path was actually /var/cache/pacman/… But anyway these posts full of wiki links never hurt, on the contrary My guess is OP used an old ISO. Its probably best/easiest to simply suggest using a current image. What version of the Manjaro ISO are you using? Seems like i used an older version (17.1)
OPCFW_CODE
30th June 2004, 05:04 PM Please help a troubled soul with DHCP here... Recently I tried to use a wifi adaptor with my wifi router. I previously use network cable to connect to the wifi router with no problem at all. Using DHCP and auto get IP, auto detect network setting, etc. After having some problem with it, I tried unplug the wifi adaptor and use back the ntework cable. now the cable didnt work either! Symtom: the connection is enabled, the network status dialog show data packets sent out, but 0 packet is received. the details show that I am assigned a IP in different sub-group than the servers.. How come? The router is 192.168.1.1, but the IP assigned to my is someting like 169.2xx.xxx.xxx, I tried to ping the router but obviously cant reach. All my house mates can connect with no prob, cable or wireless. They are assigned the IP 192.168.XXX.XXX. I tried to use fix IP 192.168.xxx.xxx. Can connect, but internet still cannot browse. Changed back to DHCP, the IP again is something very off. The network card is built in one, which had been working prefectly with the cable until i tried the wifi. The wifi adaptor i tried is a dLink one. Things that I tried - reboot the PC(of course), re-set the router - reinstall network card driver - tried fixed IP, then revert back to DHCP - remove all network connection, reboot, re-add - remove all network hardware, reboot, re-add - tried the "Repair" button(Win XP only?) in the network status dialog, get "Cannot set new IP" message. Hope to get some help before I resort to reinstalling my PC.... :~( 30th June 2004, 05:06 PM I believe the cable itself is faulty not your settings. I get that kind of IP you mentioned when I tried to do a renew when there is no LAN connectivity. Try to do a ipconfig / release * then try to plug in the cable and reboot. Hope it works, if it doesn't might be possible your cable is lemon. 30th June 2004, 05:19 PM In DOS prompt do an ipconfig/release and then ipconfig/renew and see if this works. Of course using a new cable. 30th June 2004, 05:39 PM Thanks! Will try immediately when I reached home. Originally Posted by jbma
OPCFW_CODE